PEP 465 - @ operator

A dedicated infix operator for matrix multiplication, added in Python 3.5

$$ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \times \begin{bmatrix} 11 & 12 \\ 13 & 14 \end{bmatrix} = \text{?} $$

In NumPy (and in numerical computing generally), there are two kinds of multiplication:

  • elementwise
  • matrix

Elementwise Multiplication

$$ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \times \begin{bmatrix} 11 & 12 \\ 13 & 14 \end{bmatrix} = \begin{bmatrix} 1 \cdot 11 & 2 \cdot 12\\ 3 \cdot 13 & 4 \cdot 14 \end{bmatrix} $$

Matrix Multiplication

$$ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \times \begin{bmatrix} 11 & 12 \\ 13 & 14 \end{bmatrix} = \begin{bmatrix} 1 \cdot 11 + 2 \cdot 13 & 1 \cdot 12 + 2 \cdot 14\\ 3 \cdot 11 + 4 \cdot 13 & 3 \cdot 12 + 4 \cdot 14\\ \end{bmatrix} $$

In [1]:
import numpy as np

NumPy arrays are C-style (row-major) contiguous by default.
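
We can confirm the layout by inspecting an array's flags attribute (a quick sketch; the transpose of a C-contiguous array is a Fortran-order view):

a = np.arange(6).reshape(2, 3)
print(a.flags['C_CONTIGUOUS'])    # True: rows are stored contiguously
print(a.T.flags['C_CONTIGUOUS'])  # False: the transpose is Fortran-contiguous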


In [10]:
matA = np.array([[1, 2], [3, 4]], dtype=np.int64)
matB = np.array([[11, 12], [13, 14]], dtype=np.int64)

print('matrix A: \n%r' % matA)
print('matrix B: \n%r' % matB)


matrix A: 
array([[1, 2],
       [3, 4]])
matrix B: 
array([[11, 12],
       [13, 14]])

Elementwise


In [11]:
matA * matB


Out[11]:
array([[11, 24],
       [39, 56]])

In [13]:
np.multiply(matA, matB)


Out[13]:
array([[11, 24],
       [39, 56]])

In [17]:
matA.__mul__(matB)


Out[17]:
array([[11, 24],
       [39, 56]])

Matrix Multiplication


In [18]:
matA @ matB


Out[18]:
array([[37, 40],
       [85, 92]])

In [15]:
np.dot(matA, matB)


Out[15]:
array([[37, 40],
       [85, 92]])

In [16]:
matA.dot(matB)


Out[16]:
array([[37, 40],
       [85, 92]])
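
Just as * dispatches to the __mul__ special method, @ dispatches to __matmul__, and np.matmul is the corresponding function. For 2-D arrays these agree with np.dot; for stacks of matrices they differ, since matmul broadcasts over the leading dimensions while dot does not.

np.matmul(matA, matB)   # array([[37, 40], [85, 92]])
matA.__matmul__(matB)   # the special method that @ dispatches to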

Why an @ operator?

Because np.dot is so common that nested calls quickly become unreadable.

(Example from PEP 465) Try writing the following equation in NumPy.

$$ S = (H\beta -r )^\top(HVH^\top)^{-1}(H\beta -r) $$
import numpy as np
from numpy.linalg import inv, solve

# H, V, beta, r are assumed to be already-defined arrays of compatible shapes.

# Using the dot function:
S = np.dot(
    (np.dot(H, beta) - r).T,
    np.dot(inv(np.dot(np.dot(H, V), H.T)), np.dot(H, beta) - r)
)

# Using the dot method:
S = (H.dot(beta) - r).T.dot(inv(H.dot(V).dot(H.T))).dot(H.dot(beta) - r)

But with @, the same expression becomes:

S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

We can optimize the computation further:

# Version 1 (as above)
S = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

# Version 2: hoist the repeated subexpression
trans_coef = H @ beta - r
S = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef

# Version 3: solve the linear system instead of forming the inverse
S = trans_coef.T @ solve(H @ V @ H.T, trans_coef)
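
As a sanity check, the three versions agree numerically. Here is a minimal self-contained sketch with randomly generated inputs (the shapes below are illustrative assumptions, not taken from the PEP):

import numpy as np
from numpy.linalg import inv, solve

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 5))
V = np.eye(5)                          # assume V is symmetric positive definite
beta = rng.standard_normal((5, 1))
r = rng.standard_normal((3, 1))

S1 = (H @ beta - r).T @ inv(H @ V @ H.T) @ (H @ beta - r)

trans_coef = H @ beta - r
S2 = trans_coef.T @ inv(H @ V @ H.T) @ trans_coef
S3 = trans_coef.T @ solve(H @ V @ H.T, trans_coef)

print(np.allclose(S1, S2) and np.allclose(S2, S3))  # True

Version 3 is generally preferred: solve factors the system directly, which is faster and more numerically stable than computing an explicit inverse.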

Reference: doc/neps/return-of-revenge-of-matmul-pep.rst (the original draft of PEP 465, in the NumPy repository)

