You were invited to Piazza, where you can find announcements about the course project. In case you didn't get an invitation from Piazza at your @skoltech.ru email, ask a TA to set you up there.
Our next goal is to process this signal by multiplying it by a special type of matrix (convolution operation) that will smooth the signal.
To make the problem smaller, downsample the signal by taking every $p$-th sample: `signal[::p]`.

Write a function `gen_toeplitz` ($N,\alpha$) that outputs the matrix $T$: $$T_{ij} = \sqrt{\frac{\alpha}{\pi}}e^{-\alpha (i-j)^2}, \quad i,j=1,\dots,N$$ as a numpy array. Avoid using loops or lists! The function `scipy.linalg.toeplitz` might be helpful for this task.
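One possible vectorized sketch (a hint of shape, not a prescribed solution; it assumes `scipy` is available):

```python
import numpy as np
from scipy.linalg import toeplitz

def gen_toeplitz(N, alpha):
    # The matrix depends only on |i - j|, so its first column determines
    # everything: c[k] = sqrt(alpha/pi) * exp(-alpha * k**2).
    c = np.sqrt(alpha / np.pi) * np.exp(-alpha * np.arange(N) ** 2)
    return toeplitz(c)  # symmetric Toeplitz matrix, no Python loops
```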
Note: matrices that depend only on the difference of indices, $T_{ij} \equiv T_{i-j}$, are called Toeplitz. Toeplitz matrix-by-vector multiplication is a convolution, since it can be written as $y_i = \sum_{j=1}^N T_{i-j} x_j$. Convolutions can be computed faster than $\mathcal{O}(N^2)$ using the Fast Fourier transform (this will be covered later in our course; no need to implement it here).

Plot the results using `plt.subplots` in matplotlib. Each subplot should contain the first $100$ points of the initial and convolved signals for some $\alpha$. Make sure your results look like a smoothed version of the initial signal.

To listen to the result, convert it to 16-bit integers, `your_array = your_array.astype(np.int16)`, and use `wavPlayer`. Note that a signal with too small an amplitude cannot be played.

Given the convolved signal $y$ and the initial signal $x$, our goal now is to recover $x$ by solving the system $$ y = Tx. $$ To do so we will run the iterative process $$ x_{k+1} = x_{k} - \tau_k (Tx_k - y), \quad k=1,2,\dots $$ starting from the zero vector $x_0$. There are different ways to define the parameters $\tau_k$; different choices lead to different methods (e.g. Richardson iteration, Chebyshev iteration, etc.). This topic will be covered in detail later in our course.
To get some intuition for why this process converges to the solution of $Tx=y$, note that if $x_k$ converges to some limit $x$ (and $\tau_k$ to some nonzero $\tau$), then so does $x_{k+1}$, and taking $k\to \infty$ we arrive at $x = x - \tau (Tx - y)$; hence $x$ is a solution of $Tx = y$.
Another important point is that the iterative process requires only the matrix-vector products $Tx_k$ at each iteration instead of the whole matrix. In this problem we nevertheless work with the full matrix, but keep in mind that the convolution can be applied efficiently without storing it.
For each $k$ choose the parameter $\tau_k$ such that the residual $r_{k+1}=Tx_{k+1} - y$ is as small as possible (a line search with search direction $r_k$): $$ \|Tx_{k+1} - y\|_2 \to \min_{\tau_k}, $$ solved analytically. The answer to this bullet is a derivation of $\tau_k$; express $\tau_k$ in terms of the residual $r_k = T x_k - y$.
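As a sketch of how such a line search is usually set up (verify every step yourself, since the derivation is the deliverable): substituting $x_{k+1} = x_k - \tau_k r_k$ gives

$$r_{k+1} = Tx_{k+1} - y = r_k - \tau_k T r_k,$$

so

$$\|r_{k+1}\|_2^2 = \|r_k\|_2^2 - 2\tau_k \,(r_k, Tr_k) + \tau_k^2 \,\|Tr_k\|_2^2,$$

which is a quadratic in $\tau_k$; setting its derivative with respect to $\tau_k$ to zero yields

$$\tau_k = \frac{(r_k, Tr_k)}{(Tr_k, Tr_k)}.$$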
Write a function `iterative(N, num_iter, y, $\alpha$)` that outputs a numpy array of relative errors $\big\{\frac{\|x_{k+1} - x\|_2}{\|x\|_2}\big\}$ after `num_iter` iterations, using $\tau_k$ from the previous task. Set `num_iter=1000`, `x=s[::20]`, and make a convergence plot for $\alpha = \frac{1}{2}$ and $\alpha = \frac{1}{5}$. Note: the only loop you are allowed to use here is the loop over $k$.
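A minimal sketch of such a routine, assuming the `gen_toeplitz` above and the line-search $\tau_k$; the extra `x_true` argument used to record the error history is an assumption (in the notebook the true signal may instead be available globally):

```python
import numpy as np
from scipy.linalg import toeplitz

def gen_toeplitz(N, alpha):
    c = np.sqrt(alpha / np.pi) * np.exp(-alpha * np.arange(N) ** 2)
    return toeplitz(c)

def iterative(N, num_iter, y, alpha, x_true):
    # x_{k+1} = x_k - tau_k (T x_k - y), with tau_k chosen so that
    # ||T x_{k+1} - y||_2 is minimal; x_true is used only to record
    # the relative error at each step.
    T = gen_toeplitz(N, alpha)
    x = np.zeros(N)
    errors = np.empty(num_iter)
    nrm = np.linalg.norm(x_true)
    for k in range(num_iter):           # the only loop, over k
        r = T @ x - y                   # residual r_k
        Tr = T @ r
        x = x - (r @ Tr) / (Tr @ Tr) * r
        errors[k] = np.linalg.norm(x - x_true) / nrm
    return errors
```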
Set `x=s[::20]`, `num_iter=1000` and $\alpha=\frac{1}{5}$. Explain what happens to the convergence if you add small random noise of amplitude $10^{-3}\max(x)$ to $y$. The answer to this question should be an explanation supported by plots and/or tables.
It might be a good idea not to run the recursion in the Strassen algorithm all the way down to the bottom level. Often only a few levels of the recursion are used. This helps to reduce the constant in front of $n^3$.
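As an illustration, a sketch of this hybrid strategy (assuming square matrices whose size is a power of two; the `leaf` parameter is a hypothetical name controlling how many recursion levels are used before falling back to ordinary multiplication):

```python
import numpy as np

def strassen(A, B, leaf=64):
    # Strassen matrix multiplication with an early cutoff: blocks of
    # size <= leaf are multiplied by the ordinary O(n^3) routine.
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # the seven Strassen products
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])
```

Each extra level replaces 8 block multiplications by 7, so $k$ levels cut the $n^3$ constant by a factor of $(7/8)^k$ at the cost of extra additions.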
Use `plt.subplots` and arrange the images as a $2\times 2$ array.
Write a function `HOSVD(n, epsilon)` that implements the Higher-Order SVD (HOSVD) algorithm in 3D and returns the core and the 3 HOSVD factors (`epsilon` is the relative accuracy in the Frobenius norm between the approximated and initial tensors). Give the ranks of the 3D Hilbert tensor
$$
a_{ijk} = \frac{1}{i+j+k + 1}, \quad i,j,k = 0,\dots, n-1
$$
with $10^{-5}$ accuracy. Details can be found here, in Figure 4.3.
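A possible sketch of such a routine (the per-mode truncation threshold $\varepsilon\|A\|_F/\sqrt{3}$ is one common choice that guarantees the overall relative accuracy via the HOSVD error bound; other splittings of the budget are possible):

```python
import numpy as np

def hosvd3(a, eps):
    """HOSVD of a 3D tensor `a` with relative Frobenius accuracy `eps`.
    Returns (core, [U1, U2, U3])."""
    nrm = np.linalg.norm(a)
    factors = []
    for mode in range(3):
        # mode-k unfolding: rows indexed by the k-th dimension
        unfold = np.moveaxis(a, mode, 0).reshape(a.shape[mode], -1)
        U, s, _ = np.linalg.svd(unfold, full_matrices=False)
        # smallest rank whose discarded singular values fit the budget:
        # tail[r] = ||s[r:]||_2
        tail = np.sqrt(np.concatenate([np.cumsum((s ** 2)[::-1])[::-1], [0.0]]))
        r = int(np.argmax(tail <= eps * nrm / np.sqrt(3)))
        factors.append(U[:, :max(r, 1)])
    U1, U2, U3 = factors
    # project onto the column spaces of the factors to get the core
    core = np.einsum('ijk,ia,jb,kc->abc', a, U1, U2, U3)
    return core, factors
```

The shape of the returned core gives the mode ranks, and the approximation is recovered as `np.einsum('abc,ia,jb,kc->ijk', core, U1, U2, U3)`.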