SciPy Sparse Matrices

Sparse matrices are useful whenever most of the entries in a matrix are zero.

For example, in some machine learning tasks, especially those associated with textual analysis, the data may be mostly zeros.

Storing all these zeros is very inefficient.

We can create and manipulate sparse matrices as follows:


In [1]:
import numpy as np

In [2]:
# Create a random array with a lot of zeros
X = np.random.random((10, 5))
print(X)


[[0.52508939 0.55969684 0.38059541 0.14994033 0.3561533 ]
 [0.94612104 0.20796991 0.18345058 0.03266521 0.71642811]
 [0.76801146 0.18143891 0.44346617 0.3509763  0.70771478]
 [0.96785438 0.64010409 0.20666769 0.99005094 0.42858088]
 [0.24971981 0.88585392 0.1683662  0.70119483 0.48374682]
 [0.01736319 0.87369042 0.19830546 0.56395574 0.20060824]
 [0.11881578 0.65524562 0.21570217 0.02114718 0.8527528 ]
 [0.7722977  0.44208694 0.01126588 0.80556187 0.07607147]
 [0.75409907 0.78761663 0.41863968 0.30373673 0.63332945]
 [0.99874432 0.37336682 0.14359151 0.76142434 0.1988419 ]]

In [4]:
X[X < 0.7] = 0  # note: boolean (mask) indexing
print(X)


[[0.         0.         0.         0.         0.        ]
 [0.94612104 0.         0.         0.         0.71642811]
 [0.76801146 0.         0.         0.         0.70771478]
 [0.96785438 0.         0.         0.99005094 0.        ]
 [0.         0.88585392 0.         0.70119483 0.        ]
 [0.         0.87369042 0.         0.         0.        ]
 [0.         0.         0.         0.         0.8527528 ]
 [0.7722977  0.         0.         0.80556187 0.        ]
 [0.75409907 0.78761663 0.         0.         0.        ]
 [0.99874432 0.         0.         0.76142434 0.        ]]

In [5]:
from scipy import sparse

# turn X into a csr (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)


  (1, 0)	0.9461210440608149
  (1, 4)	0.7164281142304602
  (2, 0)	0.7680114556976801
  (2, 4)	0.7077147754658187
  (3, 0)	0.9678543752795629
  (3, 3)	0.9900509407165115
  (4, 1)	0.8858539179438214
  (4, 3)	0.7011948276939008
  (5, 1)	0.8736904234085155
  (6, 4)	0.8527528049269587
  (7, 0)	0.7722977020522017
  (7, 3)	0.8055618728634483
  (8, 0)	0.7540990714791828
  (8, 1)	0.7876166309534933
  (9, 0)	0.9987443167367364
  (9, 3)	0.7614243372618548

In [6]:
# convert the sparse matrix to a dense array
print(X_csr.toarray())


[[0.         0.         0.         0.         0.        ]
 [0.94612104 0.         0.         0.         0.71642811]
 [0.76801146 0.         0.         0.         0.70771478]
 [0.96785438 0.         0.         0.99005094 0.        ]
 [0.         0.88585392 0.         0.70119483 0.        ]
 [0.         0.87369042 0.         0.         0.        ]
 [0.         0.         0.         0.         0.8527528 ]
 [0.7722977  0.         0.         0.80556187 0.        ]
 [0.75409907 0.78761663 0.         0.         0.        ]
 [0.99874432 0.         0.         0.76142434 0.        ]]

In [7]:
# Sparse matrices support linear algebra:
y = np.random.random(X_csr.shape[1])
z1 = X_csr.dot(y)
z2 = X.dot(y)
np.allclose(z1, z2)


Out[7]:
True
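
To see the storage savings directly, we can compare the memory used by a dense array with the memory used by its CSR representation. This is a quick sketch on a larger random matrix (the 90% threshold here is just an illustrative choice):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
X_big = rng.random((1000, 1000))
X_big[X_big < 0.9] = 0  # make ~90% of entries zero

X_big_csr = sparse.csr_matrix(X_big)

dense_bytes = X_big.nbytes
# CSR stores only the nonzero values plus two index arrays
sparse_bytes = (X_big_csr.data.nbytes
                + X_big_csr.indices.nbytes
                + X_big_csr.indptr.nbytes)
print(dense_bytes, sparse_bytes)
```

With 90% sparsity the CSR form needs only a small fraction of the dense storage; the savings grow as the matrix gets sparser.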
  • The CSR representation can be very efficient for computations, but it is not as good for adding elements.

  • For that, the LIL (List of Lists) representation is better:


In [8]:
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))

for i, j in np.random.randint(0, 5, (15, 2)):
    X_lil[i, j] = i + j

print(X_lil)
print(X_lil.toarray())


  (0, 1)	1.0
  (0, 2)	2.0
  (1, 1)	2.0
  (1, 3)	4.0
  (2, 0)	2.0
  (2, 1)	3.0
  (2, 2)	4.0
  (2, 3)	5.0
  (3, 0)	3.0
  (4, 0)	4.0
  (4, 1)	5.0
  (4, 2)	6.0
[[0. 1. 2. 0. 0.]
 [0. 2. 0. 4. 0.]
 [2. 3. 4. 5. 0.]
 [3. 0. 0. 0. 0.]
 [4. 5. 6. 0. 0.]]
  • Often, once an LIL matrix is created, it is useful to convert it to a CSR format
    • Note: many scikit-learn algorithms require CSR or CSC format

In [9]:
X_csr = X_lil.tocsr()
print(X_csr)


  (0, 1)	1.0
  (0, 2)	2.0
  (1, 1)	2.0
  (1, 3)	4.0
  (2, 0)	2.0
  (2, 1)	3.0
  (2, 2)	4.0
  (2, 3)	5.0
  (3, 0)	3.0
  (4, 0)	4.0
  (4, 1)	5.0
  (4, 2)	6.0

There are several other sparse formats that can be useful for various problems:

  • CSC (compressed sparse column)
  • BSR (block sparse row)
  • COO (coordinate)
  • DIA (diagonal)
  • DOK (dictionary of keys)

CSC - Compressed Sparse Column

Advantages of the CSC format

* efficient arithmetic operations CSC + CSC, CSC * CSC, etc.
* efficient column slicing
* fast matrix vector products (CSR, BSR may be faster)

Disadvantages of the CSC format

* slow row slicing operations (consider CSR)
* changes to the sparsity structure are expensive (consider LIL or DOK)
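
A short sketch of the CSC format's strength, column slicing (the matrix here is just a small illustrative example):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
X = rng.random((6, 6))
X[X < 0.7] = 0

X_csc = sparse.csc_matrix(X)

# column slicing is efficient in CSC: each column's
# nonzeros are stored contiguously
col = X_csc[:, 2]
print(col.toarray().ravel())
```

Row slicing on a CSC matrix works too, but is slower; for row-oriented access, CSR is the better choice.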

BSR - Block Sparse Row

The Block Sparse Row (BSR) format is very similar to the Compressed Sparse Row (CSR) format.

BSR is appropriate for sparse matrices with dense sub-matrices, like the example below.

Block matrices often arise in vector-valued finite element discretizations.

In such cases, BSR is considerably more efficient than CSR and CSC for many sparse arithmetic operations.


In [10]:
from scipy.sparse import bsr_matrix

indptr = np.array([0, 2, 3, 6])
indices = np.array([0, 2, 2, 0, 1, 2])
data = np.array([1, 2, 3, 4, 5, 6]).repeat(4).reshape(6, 2, 2)
bsr_matrix((data, indices, indptr), shape=(6, 6)).toarray()


Out[10]:
array([[1, 1, 0, 0, 2, 2],
       [1, 1, 0, 0, 2, 2],
       [0, 0, 0, 0, 3, 3],
       [0, 0, 0, 0, 3, 3],
       [4, 4, 5, 5, 6, 6],
       [4, 4, 5, 5, 6, 6]])

COO - Coordinate Sparse Matrix

Advantages of the COO format

* facilitates fast conversion among sparse formats
* permits duplicate entries
* very fast conversion to and from CSR/CSC formats

Disadvantages of the COO format

* does not directly support arithmetic operations and slicing

Intended Usage

* COO is a fast format for constructing sparse matrices
* Once a matrix has been constructed, convert to CSR or CSC format for fast arithmetic and matrix-vector operations
* By default, when converting to CSR or CSC format, duplicate (i, j) entries are summed together. This facilitates efficient construction of finite element matrices and the like.
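
A small sketch of the duplicate-entry behavior: the same (row, col) pair appears more than once in the COO input, and the duplicates are summed when the matrix is converted to CSR.

```python
import numpy as np
from scipy.sparse import coo_matrix

# duplicate (row, col) pairs are allowed in COO input
row = np.array([0, 0, 1, 2, 2])
col = np.array([0, 0, 1, 2, 2])
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

A = coo_matrix((data, (row, col)), shape=(3, 3))

# duplicates at (0, 0) and (2, 2) are summed on conversion
print(A.tocsr().toarray())
# [[3. 0. 0.]
#  [0. 3. 0.]
#  [0. 0. 9.]]
```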

DOK - Dictionary of Keys

The DOK format stores the matrix as a dictionary mapping (row, column) keys to values. It allows efficient O(1) access to individual elements; duplicates are not allowed. Once constructed, a DOK matrix can be efficiently converted to a COO matrix. Like the other sparse formats, it supports arithmetic operations: addition, subtraction, multiplication, division, and matrix power.


In [11]:
from scipy.sparse import dok_matrix
S = dok_matrix((5, 5), dtype=np.float32)
for i in range(5):
    for j in range(i, 5):
        S[i,j] = i+j
        
S.toarray()


Out[11]:
array([[0., 1., 2., 3., 4.],
       [0., 2., 3., 4., 5.],
       [0., 0., 4., 5., 6.],
       [0., 0., 0., 6., 7.],
       [0., 0., 0., 0., 8.]], dtype=float32)

The scipy.sparse submodule also provides many functions for sparse matrices, including linear algebra routines, sparse solvers, graph algorithms, and much more.
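
As one example of those routines, scipy.sparse.linalg.spsolve solves a sparse linear system directly. The tridiagonal system below is just an illustrative choice:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# a small sparse tridiagonal system A x = b
A = sparse.diags(
    [[-1.0] * 4, [2.0] * 5, [-1.0] * 4],
    offsets=[-1, 0, 1],
    format='csc',  # spsolve works best with CSC or CSR input
)
b = np.ones(5)

x = spsolve(A, b)
print(np.allclose(A.dot(x), b))  # True
```

For very large systems, the iterative solvers in scipy.sparse.linalg (such as conjugate gradient) are often a better fit than a direct solve.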