Sparse matrices are useful whenever most of the entries in a matrix are zero.
For example, in some machine learning tasks, especially those associated with textual analysis, the data may be mostly zeros.
Storing all these zeros is very inefficient.
We can create and manipulate sparse matrices as follows:
In [1]:
import numpy as np
In [2]:
# Create a random array with a lot of zeros
X = np.random.random((10, 5))
print(X)
In [4]:
X[X < 0.7] = 0  # note: boolean mask indexing
print(X)
In [5]:
from scipy import sparse
# turn X into a csr (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
In [6]:
# convert the sparse matrix to a dense array
print(X_csr.toarray())
In [7]:
# Sparse matrices support linear algebra:
y = np.random.random(X_csr.shape[1])
z1 = X_csr.dot(y)
z2 = X.dot(y)
np.allclose(z1, z2)
Out[7]:
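To see why CSR is so compact for mostly-zero data, we can inspect the three flat arrays it stores internally (`data`, `indices`, and `indptr`). A minimal sketch on a tiny hand-built matrix:

```python
import numpy as np
from scipy import sparse

# small example to inspect the arrays behind CSR
X = np.zeros((4, 4))
X[0, 1] = 5.0
X[2, 3] = 7.0
X_csr = sparse.csr_matrix(X)

print(X_csr.data)     # non-zero values:            [5. 7.]
print(X_csr.indices)  # column index of each value: [1 3]
print(X_csr.indptr)   # row i occupies data[indptr[i]:indptr[i+1]]: [0 1 1 2 2]
```

Only the non-zero values and their positions are stored, so memory scales with the number of non-zeros rather than the full matrix size.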
The CSR representation can be very efficient for computations, but it is not as good for adding elements.
For that, the LIL (List of Lists) representation is better:
In [8]:
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
    X_lil[i, j] = i + j
print(X_lil)
print(X_lil.toarray())
In [9]:
X_csr = X_lil.tocsr()
print(X_csr)
There are several other sparse formats that can be useful for various problems:
* CSC (compressed sparse column)
* BSR (block sparse row)
* COO (coordinate)
* DIA (diagonal)
* DOK (dictionary of keys)
Advantages of the CSC format
* efficient arithmetic operations (CSC + CSC, CSC * CSC, etc.)
* efficient column slicing
* fast matrix vector products (CSR, BSR may be faster)
Disadvantages of the CSC format
* slow row slicing operations (consider CSR)
* changes to the sparsity structure are expensive (consider LIL or DOK)
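As a quick sketch of the column-slicing advantage: because CSC stores each column's entries contiguously, extracting a column is cheap (using a small identity matrix as toy data):

```python
import numpy as np
from scipy import sparse

# CSC stores columns contiguously, so grabbing a column is cheap
X_csc = sparse.csc_matrix(np.eye(5))
col = X_csc[:, 2]             # result stays sparse
print(col.toarray().ravel())  # [0. 0. 1. 0. 0.]
```

Row slicing on the same matrix would require gathering entries scattered across all the column arrays, which is why CSR is preferred for row-oriented access.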
The Block Sparse Row (BSR) format is very similar to the Compressed Sparse Row (CSR) format.
BSR is appropriate for sparse matrices with dense sub-matrices, as in the example below.
Block matrices often arise in vector-valued finite element discretizations.
In such cases, BSR is considerably more efficient than CSR and CSC for many sparse arithmetic operations.
In [10]:
from scipy.sparse import bsr_matrix
indptr = np.array([0, 2, 3, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, 2, 2)
bsr_matrix((data, indices, indptr), shape=(6, 6)).toarray()
Out[10]:
Advantages of the COO format
* facilitates fast conversion among sparse formats
* permits duplicate (i, j) entries
* very fast conversion to and from CSR/CSC formats
Disadvantages of the COO format
* does not directly support arithmetic operations and slicing
Intended Usage
* COO is a fast format for constructing sparse matrices
* Once a matrix has been constructed, convert to CSR or CSC format for fast arithmetic and matrix-vector operations
* By default when converting to CSR or CSC format, duplicate (i,j) entries will be summed together.
This facilitates efficient construction of finite element matrices and the like.
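A minimal sketch of this workflow, showing that duplicate (i, j) entries are summed when the COO matrix is converted to CSR:

```python
import numpy as np
from scipy.sparse import coo_matrix

# two entries at (0, 0); COO allows duplicates
row = np.array([0, 0, 1])
col = np.array([0, 0, 2])
data = np.array([1.0, 2.0, 3.0])
A = coo_matrix((data, (row, col)), shape=(3, 3))

# conversion to CSR sums the duplicates: entry (0, 0) becomes 3.0
print(A.tocsr().toarray())
```

This summing behavior is exactly what finite element assembly needs: each element contributes to overlapping entries, and the contributions accumulate on conversion.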
Sparse matrices support arithmetic operations: addition, subtraction, multiplication, division, and matrix power.
The DOK (Dictionary of Keys) format allows efficient O(1) access to individual elements. Duplicates are not allowed. Once constructed, a DOK matrix can be efficiently converted to a coo_matrix.
In [11]:
from scipy.sparse import dok_matrix
S = dok_matrix((5, 5), dtype=np.float32)
for i in range(5):
    for j in range(i, 5):
        S[i, j] = i + j
S.toarray()
Out[11]:
The scipy.sparse submodule also contains many functions for working with sparse matrices, including linear algebra routines, sparse solvers, graph algorithms, and much more.
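As a small illustration of the sparse linear algebra routines, the sketch below builds a tridiagonal system (a 1-D Laplacian) directly in sparse form and solves it with scipy.sparse.linalg.spsolve:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# tridiagonal 5x5 system built from its three diagonals
A = sparse.diags([[-1.0] * 4, [2.0] * 5, [-1.0] * 4],
                 offsets=[-1, 0, 1], format='csc')
b = np.ones(5)
x = spsolve(A, b)             # direct sparse solve
print(np.allclose(A @ x, b))  # True
```

For very large systems, iterative solvers such as scipy.sparse.linalg.cg are often preferred over a direct factorization.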