In NumPy (as in many numerical computing settings), there are two kinds of multiplication to handle: element-wise multiplication and matrix multiplication.
In [1]:
import numpy as np
NumPy arrays are C-style (row-major) contiguous by default.
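As a quick check (a minimal sketch; the variable names are illustrative), the flags attribute of an array reports its memory layout:

import numpy as np

a = np.array([[1, 2], [3, 4]])    # default layout: C order (row-major)
f = np.asfortranarray(a)          # explicit Fortran-order (column-major) copy
print(a.flags['C_CONTIGUOUS'])    # True
print(f.flags['F_CONTIGUOUS'])    # True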
In [10]:
matA = np.array([[1, 2], [3, 4]], dtype=int)
matB = np.array([[11, 12], [13, 14]], dtype=int)
print('matrix A: \n%r' % matA)
print('matrix B: \n%r' % matB)
In [11]:
matA * matB
Out[11]:
array([[11, 24],
       [39, 56]])
In [13]:
np.multiply(matA, matB)
Out[13]:
array([[11, 24],
       [39, 56]])
In [17]:
matA.__mul__(matB)
Out[17]:
array([[11, 24],
       [39, 56]])
In [18]:
matA @ matB
Out[18]:
array([[37, 40],
       [85, 92]])
In [15]:
np.dot(matA, matB)
Out[15]:
array([[37, 40],
       [85, 92]])
In [16]:
matA.dot(matB)
Out[16]:
array([[37, 40],
       [85, 92]])
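As an aside (a minimal sketch, reusing the matA and matB defined above): just as * dispatches to the __mul__ special method, the @ operator dispatches to __matmul__:

# @ is syntactic sugar for the __matmul__ special method
matA.__matmul__(matB)    # same result as matA @ matB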
The first three forms (*, np.multiply, __mul__) perform element-wise multiplication; the last three (@, np.dot, the dot method) perform matrix multiplication. Because matrix multiplication via np.dot is so common, and the nested-call syntax is so unwieldy, PEP 465 added the dedicated @ operator in Python 3.5.
(Example from PEP 465) Try to write the following equation in NumPy.
$$ S = (H\beta - r)^\top (HVH^\top)^{-1} (H\beta - r) $$
import numpy as np
from numpy.linalg import inv, solve
# Using dot function:
S = np.dot(
    (np.dot(H, beta) - r).T,
    np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r)
)
# Using dot method:
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)
But with the @ operator we have:
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
We can further optimize the computation:
# Version 1 (as above)
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)
# Version 2: compute the common subexpression H @ beta - r only once
trans_coef = H @ beta - r
S = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef
# Version 3: solve the linear system directly instead of forming the inverse
S = trans_coef.T @ solve(H @ V @ H.T, trans_coef)
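The snippets above leave H, V, beta, and r undefined. The following is a self-contained sketch (the shapes and random data are purely illustrative) that checks that the inverse-based and solve-based formulations agree:

import numpy as np
from numpy.linalg import inv, solve

rng = np.random.default_rng(0)
q, p = 2, 4                          # illustrative sizes: q constraints, p coefficients
H = rng.standard_normal((q, p))      # constraint matrix
A = rng.standard_normal((p, p))
V = A @ A.T + p * np.eye(p)          # a symmetric positive-definite covariance
beta = rng.standard_normal(p)        # coefficient vector
r = rng.standard_normal(q)           # constraint targets

trans_coef = H @ beta - r
S_inv = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef
S_solve = trans_coef.T @ solve(H @ V @ H.T, trans_coef)
print(np.allclose(S_inv, S_solve))   # True: both formulations give the same scalar

Using solve avoids explicitly forming the inverse, which is generally both faster and more numerically stable.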
See also: doc/neps/return-of-revenge-of-matmul-pep.rst (the PEP 465 draft in the NumPy source tree).