There are different ways to generalize the SVD to tensors:

- CP-format: $A(i_1, \ldots,i_d) \approx \sum_{\alpha=1}^r U_1(i_1, \alpha) \ldots U_d(i_d, \alpha)$
- Subspace approaches (Tucker format, HT-format, TT-format) that represent a "good" manifold as an intersection of "low-rank" manifolds in $\mathbb{R}^{n^d}$
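As a small illustration of the CP formula above, the reconstruction of a 3D tensor from its factors is a single `einsum` contraction (the factor names `U1`, `U2`, `U3` and sizes are our illustrative choices):

```python
import numpy as np

# Toy CP reconstruction: A(i,j,k) = sum_a U1(i,a) U2(j,a) U3(k,a)
rng = np.random.default_rng(0)
n1, n2, n3, r = 4, 5, 6, 3
U1 = rng.standard_normal((n1, r))
U2 = rng.standard_normal((n2, r))
U3 = rng.standard_normal((n3, r))
A = np.einsum('ia,ja,ka->ijk', U1, U2, U3)  # sum over the shared CP index alpha

# spot-check one entry against the definition
assert np.isclose(A[1, 2, 3], sum(U1[1, a] * U2[2, a] * U3[3, a] for a in range(r)))
```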

- We can do basic linear algebra (addition, multiplication, norm computation)
- We can do full-to-TT compression
- We can do "rounding" (reapproximate a given tensor by another one with guaranteed complexity)
- We can do cross approximation of the tensor
- All of this is implemented in open-source software (the TT-Toolbox, in MATLAB and Python)
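The full-to-TT compression step can be sketched in plain numpy via sequential truncated SVDs of the unfoldings (a minimal TT-SVD sketch; the function names are ours, not the TT-Toolbox API):

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    # Compress a full tensor into TT cores by sequential truncated SVDs.
    d, shape = A.ndim, A.shape
    delta = eps * np.linalg.norm(A) / max(np.sqrt(d - 1), 1.0)
    cores, r_prev, C = [], 1, A
    for k in range(d - 1):
        C = C.reshape(r_prev * shape[k], -1)        # unfolding
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
        r = max(1, int(np.sum(tail > delta)))       # truncation rank
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        C = s[:r, None] * Vt[:r]
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_full(cores):
    # Contract TT cores back into the full tensor.
    G = cores[0]
    for C in cores[1:]:
        G = np.tensordot(G, C, axes=(G.ndim - 1, 0))
    return G.reshape(G.shape[1:-1])

# demo: a 4D tensor with TT-ranks (1,2,2,2,1) built from random cores
rng = np.random.default_rng(0)
cores_true = [rng.standard_normal((1, 6, 2)), rng.standard_normal((2, 6, 2)),
              rng.standard_normal((2, 6, 2)), rng.standard_normal((2, 6, 1))]
A = tt_full(cores_true)
cores = tt_svd(A)
err = np.linalg.norm(tt_full(cores) - A) / np.linalg.norm(A)
```

Since `A` has exact TT-ranks 2, the compression recovers it to machine precision with the same ranks.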

**Option 1**: Run an ordinary method (e.g., preconditioned Krylov) in tensor arithmetic,
$X_{k+1} = P(F(X_k))$
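In the matrix case, Option 1 can be sketched with SVD truncation playing the role of TT rounding after every step (a toy Richardson iteration with no preconditioner; the operator, rank, and sizes are our illustrative choices):

```python
import numpy as np

def truncate(X, r):
    # rank-r truncation via SVD -- the matrix analogue of TT rounding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

rng = np.random.default_rng(1)
n, r = 50, 4
E = rng.standard_normal((n, n)) / (10 * np.sqrt(n))
A = np.eye(n) + E @ E.T                                        # well-conditioned SPD operator
F = rng.standard_normal((n, 1)) @ rng.standard_normal((1, n))  # low-rank right-hand side
X = np.zeros((n, n))
for _ in range(50):
    X = truncate(X + (F - A @ X), r)   # X_{k+1} = round(X_k + (F - A X_k))
res = np.linalg.norm(F - A @ X) / np.linalg.norm(F)
print('relative residual:', res)
```

Because the right-hand side is low-rank and the operator is close to the identity, the truncated iterates converge to the (low-rank) solution.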

**Option 2**: Reformulate as a minimization problem, $(Ax, x) - 2(f, x) \rightarrow \min$, and replace it by minimization over the manifold.

**Minimization over the low-rank manifold is crucially important!**

(Lubich, Koch, H.-D. Meyer,...)

$$\begin{split} &\frac{dU}{dt} S = \frac{dA}{dt} V,\\ &\frac{dV}{dt} S^{\top} = \frac{dA^{\top}}{dt} U, \\ &\frac{dS}{dt} = U^{\top} \frac{dA}{dt} V. \end{split}$$

Suppose you start from rank-$1$ and evolve into a rank-$10$ case: the **matrix $S$ will be singular!**

**Theorem** (O., Lubich, 2014): The equations with $P_1$, $P_2$, $P_3$ can be efficiently integrated

**Step "L":**
$$\frac{dX}{dt} = UU^{\top}\frac{dA}{dt} \rightarrow \frac{d(VS^{\top})}{dt} = \frac{dA^{\top}}{dt} U, \quad \frac{dU}{dt}=0.$$

Thus $L_1 = L_0 + (A_1 - A_0)^{\top} U_0$, $U_1 = U_0$.

**Step "K":**
$$\frac{dX}{dt} = \frac{dA}{dt} VV^{\top} \rightarrow \frac{d(US)}{dt} = \frac{dA}{dt} V, \quad \frac{dV}{dt}=0.$$

Thus $K_1 = K_0 + (A_1 - A_0) V_0$, $V_1 = V_0$.

**Step "S":**
$$\frac{dX}{dt} = -UU^{\top}\frac{dA}{dt} VV^{\top} \rightarrow \frac{dS}{dt} = -U^{\top}\frac{dA}{dt} V, \quad \frac{dU}{dt}=0, \quad \frac{dV}{dt} = 0.$$

Thus $S_1 = S_0 \mathbf{-} U^{\top}_0(A_1 - A_0) V_0$, $U_1 = U_0$, $V_1 = V_0$.

Here $A_0 = A(t)$, $A_1 = A(t + \tau)$.

In [2]:

```python
import numpy as np

n = 5
r = 2
m = 5

def random_usv(n, m, r):
    # random rank-r factorization: orthonormal U (n x r), V (m x r), core S (r x r)
    u = np.random.randn(n, r)
    u = np.linalg.qr(u)[0]
    v = np.random.randn(m, r)
    v = np.linalg.qr(v)[0]
    s = np.random.randn(r, r)
    return u, s, v

u0, s0, v0 = random_usv(n, m, r)
u1, s1, v1 = random_usv(n, m, r)
# Generate A0, A1 from the manifold
A0 = u0.dot(s0).dot(v0.T)
A1 = u1.dot(s1).dot(v1.T)
u = u0.copy(); v = v0.copy(); s = s0.copy()
# K-step
K = u.dot(s); K = K + (A1 - A0).dot(v)
u, s = np.linalg.qr(K)
# S-step
s = s - u.T.dot(A1 - A0).dot(v)
# L-step
L = v.dot(s.T); L = L + (A1 - A0).T.dot(u)
v, s = np.linalg.qr(L)
s = s.T
Appr = u.dot(s).dot(v.T)
print('Error:', np.linalg.norm(A1 - Appr))
```

The printed error is at machine precision: since $A_1$ itself has rank $r$, one step of the splitting integrator reproduces it exactly.

$$ P_{\le k} = Q_{\le k}Q_{\le k}^\top, \qquad\hbox{and}\qquad P_{\ge k} = Q_{\ge k}Q_{\ge k}^\top, $$

Lubich, Vandereycken, O., SINUM, 2015 - accepted.

The simplest "geometrical method" for the optimization over the low-rank manifold is the projected gradient iteration:

$$ X_{k+1} = R(X_k + P (\mathrm{grad}(F))), $$
where $P$ is the projection onto the tangent space and $R$ (the retraction) brings the iterate back to the manifold. Suggestion: use KSL as the retraction. The gradient method often has slow convergence.
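A minimal matrix-manifold sketch of this iteration, minimizing $F(X) = \frac12\Vert X - A\Vert_F^2$ over rank-$r$ matrices (here we use SVD truncation as the retraction $R$ instead of KSL; problem, rank, and step size are our illustrative choices):

```python
import numpy as np

def svd_retract(X, r):
    # rank-r truncated SVD, used here as the retraction R
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r]

def tangent_project(X, Z, r):
    # projection P onto the tangent space of the rank-r manifold at X
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, V = U[:, :r], Vt[:r].T
    return U @ (U.T @ Z) + (Z @ V) @ V.T - U @ (U.T @ Z @ V) @ V.T

rng = np.random.default_rng(2)
n, r = 30, 3
# target: a nearly rank-3 matrix
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) \
    + 1e-3 * rng.standard_normal((n, n))
X = svd_retract(rng.standard_normal((n, n)), r)   # random starting point on the manifold
eta = 0.5
for _ in range(100):
    G = X - A                                     # gradient of 0.5*||X - A||_F^2
    X = svd_retract(X - eta * tangent_project(X, G, r), r)
best = svd_retract(A, r)                          # best rank-r approximation (Eckart-Young)
print(np.linalg.norm(X - best) / np.linalg.norm(best))
```

For this quadratic the iteration converges to the best rank-$r$ approximation, which can be checked against the truncated SVD.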

There are several second-order methods; a simpler alternative is a Gauss-Newton-type method. Suppose our initial point is $x_0$, and vectors in the tangent space are parametrized as $Qp$.

Then, consider the next step as a minimization of the target functional over the full tangent space.

For quadratic functionals (e.g., tensor completion, linear systems, eigenvalue problems) we have
$$Q^{\top} A Q p = Q^{\top} f$$
as the system for $p$. In the TT case, the vector $p$ consists of the core corrections $\delta G_k$, so the system is solved iteratively (and preconditioned by the ALS method). The matrix-by-vector product has a simple meaning.
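In the matrix case the projected system is just a Galerkin projection onto the subspace spanned by $Q$; a small sketch (a random subspace stands in for the tangent-space parametrization, and all sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 40, 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # SPD matrix
f = rng.standard_normal(n)
# Q: orthonormal basis playing the role of the tangent-space parametrization
Q = np.linalg.qr(rng.standard_normal((n, k)))[0]
p = np.linalg.solve(Q.T @ A @ Q, Q.T @ f)   # the projected ("small") system for p
x = Q @ p
# Galerkin property: the residual is orthogonal to span(Q)
print(np.linalg.norm(Q.T @ (f - A @ x)))
```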

A preliminary implementation for tensor completion shows that it works well (and ALS may not always work well!).

We compare the TT-KSL scheme with the MCTDH package for the benchmark problem with Henon-Heiles potential \begin{equation} \frac{d\psi}{dt} = i H \psi, \quad \psi(0) = \psi_0, \end{equation} where $H$ has the form \begin{equation} H = -\frac12 \Delta + \underbrace{\frac12 \sum_{k=1}^f q^2_k + \overbrace{\lambda \sum_{k=1}^{f-1}\left(q^2_k q_{k+1} - \frac13 q^3_{k+1} \right)}^{\textrm{anharmonic part}}}_{\textrm{Henon-Heiles potential}~V(q_1,\ldots,q_f)}. \end{equation}

The optimal $U$ can be recovered from the leading eigenvectors of the solution of the Lyapunov equation: $$ A X + XA^* = -y_0 y^*_0, $$

$$ X \approx U Z U^{\top}. $$

In http://arxiv.org/abs/1410.3335 (D. Kolesnikov, O.) we proposed to use
$$ F(U) = \int^{\infty}_0 \Vert y - U e^{Bt} U^{*} y_0 \Vert^2 dt, $$
where $B = U^* A U$, as the functional for the low-rank solution of the Lyapunov equation.

This functional is computable, given a unitary $U$!
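The low-rank structure $X \approx U Z U^{\top}$ is easy to verify numerically on a small dense problem (the stable test matrix, sizes, and truncation rank are our illustrative choices):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Dense solve of A X + X A^T = -y0 y0^T for a stable test matrix (1D Laplacian)
n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
rng = np.random.default_rng(4)
y0 = rng.standard_normal((n, 1))
X = solve_continuous_lyapunov(A, -y0 @ y0.T)

# The spectrum of X decays rapidly, so a few leading eigenvectors suffice
w, V = np.linalg.eigh(X)
order = np.argsort(np.abs(w))[::-1]
r = 15
U = V[:, order[:r]]                 # leading eigenvectors
Z = U.T @ X @ U
err = np.linalg.norm(X - U @ Z @ U.T) / np.linalg.norm(X)
print('rank-15 relative error:', err)
```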

The final **alternating low-rank method** is very simple to implement:

where $w$ is the next Arnoldi vector, and $v_k$ is the rational Krylov vector
$$v_k = (A + \lambda_k I)^{-1} y_0,$$
and $\lambda_k$ is computed as
$$ \lambda_k = q^* B_k q, $$
where $q$ is the last column of the solution $Z$ of the "small" Lyapunov equation
$$ B_k Z + ZB_k^* = -c_0 c^*_0. $$
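The shift selection above can be sketched as follows. This is a toy sketch, not the authors' implementation: we omit the alternation with the Arnoldi vectors $w$, assume $q$ is normalized before forming the Rayleigh quotient, and pick a simple stable test matrix:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 100
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # stable (eigenvalues < 0)
rng = np.random.default_rng(5)
y0 = rng.standard_normal((n, 1))
X_exact = solve_continuous_lyapunov(A, -y0 @ y0.T)        # dense reference solution

def galerkin_err(U):
    # relative error of the Galerkin approximation X ~ U Z U^T
    B = U.T @ A @ U
    c0 = U.T @ y0
    Z = solve_continuous_lyapunov(B, -c0 @ c0.T)
    return np.linalg.norm(U @ Z @ U.T - X_exact) / np.linalg.norm(X_exact)

U = np.linalg.qr(y0)[0]
err0 = galerkin_err(U)                        # rank-1 starting error
for _ in range(10):
    B = U.T @ A @ U                           # B_k = U^* A U
    c0 = U.T @ y0
    Z = solve_continuous_lyapunov(B, -c0 @ c0.T)  # "small" Lyapunov equation
    q = Z[:, -1] / np.linalg.norm(Z[:, -1])   # last column of Z (normalization assumed)
    lam = q @ B @ q                           # next shift lambda_k
    v = np.linalg.solve(A + lam * np.eye(n), y0)  # rational Krylov vector
    U = np.linalg.qr(np.column_stack([U, v]))[0]  # augment and re-orthogonalize
err = galerkin_err(U)
print(err0, err)
```

The adaptively shifted basis reduces the Galerkin error well below that of the rank-1 starting subspace.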

My webpage: http://oseledets.github.io

**Software**

- TT-Toolbox: https://github.com/oseledets/ttpy (Python)
- https://github.com/oseledets/TT-Toolbox (MATLAB)

**Papers**

- I. V. Oseledets. Tensor-train decomposition. SIAM J. Sci. Comput., 33(5):2295–2317, 2011
- D. A. Kolesnikov and I. V. Oseledets. From low-rank approximation to an efficient rational Krylov subspace method for the Lyapunov equation. arXiv preprint 1410.3335, 2014
- Christian Lubich and Ivan V. Oseledets. A projector-splitting integrator for dynamical low-rank approximation. BIT, 54(1):171–188, 2014
- Christian Lubich, Ivan Oseledets, and Bart Vandereycken. Time integration of tensor trains. arXiv preprint 1407.2042, 2014.
- Jutho Haegeman, Christian Lubich, Ivan Oseledets, Bart Vandereycken, and Frank Verstraete. Unifying time evolution and optimization with matrix product states. arXiv preprint 1408.5056, 2014.
- Lars Grasedyck, Daniel Kressner, and Christian Tobler. A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen, 36(1):53–78, 2013.

