You've done this before, but you'll do it once more: write a function named `dot_product` which takes two 1D NumPy arrays (vectors) and computes their dot product. The function takes two arguments, both 1D NumPy arrays, and returns a single floating-point number: the dot product.
Recall that the dot product is a sum of products: the corresponding elements of two vectors are multiplied with each other, then all those products are summed up into one final number.
For example, `dot_product([1, 2, 3], [4, 5, 6])` will perform the operation

(1 * 4) + (2 * 5) + (3 * 6) = 4 + 10 + 18 = 32

so `dot_product([1, 2, 3], [4, 5, 6])` should return `32`.
If the vectors are of two different lengths, this function should return `None`.
You can use `numpy` for arrays and the `numpy.sum` function, but no others (especially not the `numpy.dot` function, which does exactly this).
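A minimal sketch of one way this could look (the `np.asarray` conversion and the element-wise multiply followed by `numpy.sum` are just one approach; a plain loop that accumulates the products works equally well):

```python
import numpy as np

def dot_product(a, b):
    a, b = np.asarray(a), np.asarray(b)  # accept lists or NumPy arrays
    if a.shape[0] != b.shape[0]:         # mismatched lengths: no dot product
        return None
    return np.sum(a * b)                 # multiply corresponding elements, then sum
```

Here `a * b` produces the individual products, and `numpy.sum` collapses them into the single number the dot product requires.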
In [ ]:
In [ ]:
try:
    dot_product
except:
    assert False
else:
    assert True
In [ ]:
import numpy as np
np.random.seed(56985)
x = np.random.random(48)
y = np.random.random(48)
np.testing.assert_allclose(14.012537210130272, dot_product(x, y))
x = np.random.random(48)
y = np.random.random(49)
assert dot_product(x, y) is None
Write a function `mv_multiply` which takes a 2D NumPy matrix as the first argument and a 1D NumPy vector as the second, multiplies them together, and returns the resulting vector.
This function will specifically perform the operation $\vec{y} = A * \vec{x}$, where $A$ and $\vec{x}$ are the function arguments. Remember how to perform this multiplication:
First, you need to check that the number of columns of $A$ is the same as the length of $\vec{x}$. If not, you should print an error message and return `None`.
Second, you'll compute the dot product of each row of $A$ with the entire vector $\vec{x}$.
Third, the result of the dot product from the $i^{th}$ row of $A$ will go in the $i^{th}$ element of the solution vector, $\vec{y}$. Therefore, $\vec{y}$ will have the same number of elements as rows of $A$.
You can use `numpy` for arrays, and your `dot_product` function from Part A, but no other functions.
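As a sketch under those constraints (the row loop and the `np.zeros` preallocation are one choice among several; `dot_product` here refers to the function from Part A):

```python
import numpy as np

def mv_multiply(A, x):
    rows, cols = A.shape
    if cols != x.shape[0]:               # inner dimensions must agree
        print("Cannot multiply: A has", cols, "columns but x has", x.shape[0], "elements.")
        return None
    y = np.zeros(rows)                   # y has one element per row of A
    for i in range(rows):
        y[i] = dot_product(A[i], x)      # dot the i-th row of A with x
    return y
```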
In [ ]:
In [ ]:
try:
    mv_multiply
except:
    assert False
else:
    assert True
In [ ]:
import numpy as np
np.random.seed(487543)
A = np.random.random((92, 458))
v = np.random.random(458)
np.testing.assert_allclose(mv_multiply(A, v), np.dot(A, v))
In [ ]:
import numpy as np
np.random.seed(49589)
A = np.random.random((83, 75))
v = np.random.random(83)
assert mv_multiply(A, v) is None
Write a function `mm_multiply` which takes two 2D NumPy matrices as arguments and returns their matrix product.
This function will perform the operation $Z = X \times Y$, where $X$ and $Y$ are the function arguments. Remember how to perform matrix-matrix multiplication:
First, you need to make sure the matrix dimensions line up. For computing $X \times Y$, this means the number of columns of $X$ (the first matrix) should match the number of rows of $Y$ (the second matrix). These are referred to as the "inner dimensions": matrix dimensions are usually cited as "rows by columns", so the second dimension of the first operand $X$ is on the "inside" of the operation, as is the first dimension of the second operand $Y$. If the operation were instead $Y \times X$, you would need to make sure that the number of columns of $Y$ matches the number of rows of $X$. If these numbers don't match, you should return `None`.
Second, you'll need to create your output matrix, $Z$. The dimensions of this matrix will be the "outer dimensions" of the two operands: if we're computing $X \times Y$, then $Z$'s dimensions will have the same number of rows as $X$ (the first matrix), and the same number of columns as $Y$ (the second matrix).
Third, you'll need to compute pairwise dot products. If the operation is $X \times Y$, then these dot products will be between the $i^{th}$ row of $X$ and the $j^{th}$ column of $Y$. The resulting dot product will go in `Z[i][j]`. So first, you'll find the dot product of row 0 of $X$ with column 0 of $Y$ and put that in `Z[0][0]`. Then you'll find the dot product of row 0 of $X$ with column 1 of $Y$ and put that in `Z[0][1]`. And so on, until every row of $X$ has been dot-product-ed with every column of $Y$.
You can use `numpy`, but no functions associated with computing matrix products (and definitely not the `@` operator).
Hint: you can make use of your `mv_multiply` and/or `dot_product` functions from the previous questions to help simplify your code.
In [ ]:
In [ ]:
try:
    mm_multiply
except:
    assert False
else:
    assert True
In [ ]:
import numpy as np
np.random.seed(489547)
A = np.random.random((48, 683))
B = np.random.random((683, 58))
np.testing.assert_allclose(mm_multiply(A, B), A @ B)
A = np.random.random((359, 45))
B = np.random.random((83, 495))
assert mm_multiply(A, B) is None
In [ ]:
import numpy as np
np.random.seed(466525)
A = np.random.random((58, 683))
B = np.random.random((683, 58))
np.testing.assert_allclose(mm_multiply(B, A), B @ A)