I have a very specific sparse matrix layout and I am looking for storage recommendations.
The matrices I consider
- are symmetric and positive definite
- are made up of block matrices (all blocks have the same size)
- have blocks that are each a “KiteMatrix”, i.e. a matrix with a dense upper-left corner (the kite) and a single element repeated along the lower-right diagonal (the string).
But the “kites” are of varying sizes. Specifically, the layout is as follows (showing only the lower triangular part, since the upper part is given by symmetry):
These are the operations I want to perform
- The matrix is expanded incrementally by one row of blocks.
- In every step I need the Cholesky decomposition of the entire matrix. I can retain the Cholesky decomposition between steps, so that I only need to extend it by one block row whenever the matrix is extended (see the update sketched below). In fact, I intend to store only the Cholesky decomposition.
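To make the incremental step concrete, this is the standard block-Cholesky extension I have in mind (my own notation): if M is the current matrix with known factor M = L L^T, and [B C] is the appended block row (with C the new symmetric diagonal block), then

    M' = [ M    B^T ]        L' = [ L          0   ]
         [ B    C   ]             [ B L^{-T}   L_S ]

where L_S L_S^T = C - (B L^{-T})(B L^{-T})^T, i.e. L_S is the Cholesky factor of the Schur complement. So only the new block row of the factor has to be computed in each step.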
The overall storage complexity should thus be
sum_{k=2}^{n+1} (k-1)*k*k = O(n^4),
and the computational complexity of the Cholesky decomposition should be O(n^6). In particular, the complexity should be independent of d. The dimension d of the blocks may be extremely large; it is therefore very desirable to store the diagonal “string” parts as a single number.
My current plan is to store the blocks in a list of lists format (does scipy.sparse.lil_array support block matrices?) and the blocks themselves as a class consisting of a dense numpy block and a single number for the diagonal.
But I am unsure whether this is the best approach, and I am open to better alternatives.
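For concreteness, here is a minimal sketch of the per-block class I have in mind (the names and the to_dense helper are my own, purely for illustration):

    import numpy as np

    class KiteBlock:
        """One block: a dense k-by-k "kite" in the upper-left corner and a single
        value repeated on the remaining d - k diagonal entries (the "string")."""

        def __init__(self, kite, string_value, d):
            self.kite = np.asarray(kite, dtype=float)  # dense (k, k) upper-left corner
            self.string_value = float(string_value)    # the repeated diagonal entry
            self.d = d                                  # full dimension of the block

        def to_dense(self):
            # For illustration only; materializing d x d defeats the purpose for large d.
            k = self.kite.shape[0]
            out = np.zeros((self.d, self.d))
            out[:k, :k] = self.kite
            idx = np.arange(k, self.d)
            out[idx, idx] = self.string_value
            return out

Storage per block is then k^2 floats plus one scalar, independent of d.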
> The overall storage complexity should thus be sum_{k=2}^{n+1} (k-1)*k*k = O(n^4)
CSR or CSC fulfills this. I wrote a quick program to calculate memory use for any n or d. (For square matrices, CSR and CSC have the same memory use.)
import numpy as np
import matplotlib.pyplot as plt


def calc_element_count_for_block(d, i):
    elements = 0
    blocksize = i + 2
    # Add the dense "kite" block
    elements += blocksize ** 2
    # Add the diagonal tail (the "string")
    elements += d
    # Subtract the intersection of tail and block
    elements -= blocksize
    return elements


def calc_element_count_for_matrix(n, d):
    elements = 0
    for i in range(n):
        num_blocks = 2 * i + 1
        elements += calc_element_count_for_block(d, i) * num_blocks
    return elements


def calc_byte_count_for_matrix(n, d):
    bytes_per_element_for_indices = 8
    bytes_per_element_for_data = 8
    bytes_per_element = bytes_per_element_for_indices + bytes_per_element_for_data
    indptr_bytes = bytes_per_element_for_indices * n * d
    return calc_element_count_for_matrix(n, d) * bytes_per_element + indptr_bytes


N = 100
x = np.arange(1, N)
plt.plot(x, [calc_byte_count_for_matrix(n, N) for n in x], label="bytes")
plt.plot(x, 8 * x ** 4, label="O(n^4)")
plt.legend()
plt.xlabel("n")
plt.ylabel("bytes of memory used")
plt.show()
Memory use goes up as approximately O(n^4).
Let’s discuss some assumptions:
- This graph is for a fixed d value of 100.
- I assumed 16 bytes per element, i.e. int64 for indices and float64 for data elements. If you could use int32 and float32, that would go down to 8 bytes per element (see the sketch below).
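To make the 8-bytes-per-element variant concrete, here is a rough sketch of constructing a CSC array with 32-bit index and data dtypes (the contents are made up, and note that some SciPy operations may upcast the indices back to int64):

    import numpy as np
    from scipy import sparse

    # Explicit (data, indices, indptr) construction with float32 values and int32 indices.
    data = np.array([1.0, 2.0, 3.0], dtype=np.float32)
    indices = np.array([0, 2, 1], dtype=np.int32)   # row indices
    indptr = np.array([0, 2, 3], dtype=np.int32)    # column pointers
    A = sparse.csc_array((data, indices, indptr), shape=(3, 2))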
> and the computational complexity of the Cholesky decomposition should be O(n^6). In particular, the complexity should be independent of d.
Here there’s a problem: SciPy doesn’t have a Cholesky decomposition that operates on sparse matrices. It has scipy.linalg.cholesky, but that operates on dense matrices. You could convert to dense, but that would take more memory and be slower.
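For completeness, the dense fallback would look roughly like this (the matrix here is a made-up stand-in for yours; this is only viable while n*d stays small):

    import numpy as np
    from scipy import sparse, linalg

    # Build a small symmetric positive definite sparse matrix, then factorize its dense form.
    A = sparse.random(50, 50, density=0.1, format="csc")
    A = A @ A.T + 50 * sparse.eye(50)
    L = linalg.cholesky(A.toarray(), lower=True)  # dense lower-triangular Cholesky factor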
You could write your own Cholesky decomposition solver. I don’t know enough about Cholesky decomposition to say how difficult that would be.
You could use the sksparse.cholmod Cholesky decomposition. This would force you to use CSC format; if you pick another format, it will be converted to CSC, which will eliminate any memory savings from the other format.
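A minimal sketch of what that could look like (assuming scikit-sparse is installed; the matrix A is a placeholder for your CSC matrix):

    import numpy as np
    from scipy import sparse
    from sksparse.cholmod import cholesky

    # A stands in for your matrix; it must be symmetric positive definite and CSC.
    A = sparse.csc_matrix(4.0 * sparse.eye(5))
    factor = cholesky(A)        # CHOLMOD factorization object
    x = factor(np.ones(5))      # solve A x = b via the factorization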
You could look into some other matrix factorization from the sparse module.
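For example, scipy.sparse.linalg.splu gives a sparse LU factorization; it is not a Cholesky factor and doesn’t exploit symmetry, but it does stay sparse (again A is a placeholder):

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import splu

    A = sparse.csc_matrix(4.0 * sparse.eye(5))  # stand-in for your matrix, CSC format
    lu = splu(A)                # sparse LU factorization
    x = lu.solve(np.ones(5))    # solve A x = b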
> My current plan is to store the blocks in a list of lists format (does scipy.sparse.lil_array support block matrices?) and the blocks themselves as a class consisting of a dense numpy block and a single number for the diagonal.
Two thoughts:
- You will save at most 50% of your memory this way versus CSC. CSC stores an index and a data value for every element of your matrix. You can avoid storing the index value with this scheme, but not the data value.
- If you store a matrix of object dtype, with each element being your class storing a block matrix, then there is no point in the outer matrix being sparse. Sparse matrices where every element is filled take more space than equivalent dense matrices. It would be better in that context to use a dense NumPy matrix of object dtype as the outer matrix.
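As a rough illustration of that last suggestion (the class and layout here are hypothetical, mirroring the block class described in the question):

    import numpy as np

    class KiteBlock:
        """Stand-in for the question's block class: a dense "kite" corner plus one
        scalar for the repeated diagonal "string"."""
        def __init__(self, kite, string_value):
            self.kite = kite
            self.string_value = string_value

    # Dense outer container of object dtype: every lower-triangular (i, j) slot
    # holds a block object; the upper triangle stays None thanks to symmetry.
    n = 4
    outer = np.empty((n, n), dtype=object)
    for i in range(n):
        for j in range(i + 1):
            outer[i, j] = KiteBlock(np.eye(j + 2), 1.0)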