# Quick start

In this quick start guide we cover the basics of working with the t3f library. The main concept of the library is the TensorTrain object -- a compact (factorized) representation of a tensor (= multidimensional array). It is a generalization of the matrix low-rank decomposition.
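
To get some intuition for what "compact (factorized) representation" means, here is a plain NumPy sketch (independent of t3f) of the matrix low-rank idea that the TT-format generalizes: a rank-r matrix of size m x n can be stored as two thin factors with only m*r + r*n numbers instead of m*n.

```python
import numpy as np

m, n, r = 1000, 800, 3
# Build a rank-3 matrix as a product of two thin factors.
left = np.random.randn(m, r)
right = np.random.randn(r, n)
dense = left @ right  # m * n = 800000 numbers if stored densely.

# The factorized form stores only m*r + r*n numbers.
compact_size = left.size + right.size
print('dense entries:', dense.size)         # 800000
print('factorized entries:', compact_size)  # 5400
# The factors reconstruct the matrix exactly.
print('reconstruction error:', np.linalg.norm(left @ right - dense))  # 0.0
```

The TT-format applies the same trick to tensors with more than two dimensions, storing a chain of small factors (TT-cores) instead of the full array.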

To begin, let's import some libraries.

``````

In [1]:

import numpy as np

# Import TF 2.
%tensorflow_version 2.x
import tensorflow as tf

# Fix the seed so that the results are reproducible.
tf.random.set_seed(0)
np.random.seed(0)
try:
    import t3f
except ImportError:
    # Install T3F if it's not already installed.
    !git clone https://github.com/Bihaqo/t3f.git
    !cd t3f; pip install .
    import t3f

``````
``````

TensorFlow 2.x selected.

``````

## Converting to and from TT-format

Let's start with converting a dense (numpy) matrix into the TT-format, which in this case coincides with the low-rank format.

``````

In [2]:

# Generate a random dense matrix of size 3 x 4.
a_dense = np.random.randn(3, 4)
# Convert the matrix into the TT-format with TT-rank = 3 (the larger the TT-rank,
# the more accurately the tensor is approximated, but the more memory and time
# everything takes). For matrices, the matrix rank coincides with the TT-rank.
a_tt = t3f.to_tt_tensor(a_dense, max_tt_rank=3)
# a_tt stores the factorized representation of the matrix, namely it stores the matrix
# as a product of two smaller matrices which are called TT-cores. You can
# access the TT-cores directly.
print('factors of the matrix: ', a_tt.tt_cores)
# To check that the conversion into the TT-format didn't change the matrix too much,
# let's convert it back and compare to the original.
reconstructed_matrix = t3f.full(a_tt)
print('Original matrix: ')
print(a_dense)
print('Reconstructed matrix: ')
print(reconstructed_matrix)

``````
``````

factors of the matrix:  (<tf.Tensor: shape=(1, 3, 3), dtype=float64, numpy=
array([[[-0.86358906, -0.23239721,  0.44744327],
        [-0.42523249,  0.81253763, -0.3986978 ],
        [-0.27090823, -0.53457847, -0.80052145]]])>, <tf.Tensor: shape=(3, 4, 1), dtype=float64, numpy=
array([[[-2.2895998 ],
        [-0.04123559],
        [-1.28825847],
        [-2.2648235 ]],

       [[ 1.16267886],
        [-1.10656759],
        [ 0.46752401],
        [-1.42118407]],

       [[ 0.12735099],
        [ 0.23999328],
        [-0.05617841],
        [-0.10115877]]])>)
Original matrix:
[[ 1.76405235  0.40015721  0.97873798  2.2408932 ]
 [ 1.86755799 -0.97727788  0.95008842 -0.15135721]
 [-0.10321885  0.4105985   0.14404357  1.45427351]]
Reconstructed matrix:
tf.Tensor(
[[ 1.76405235  0.40015721  0.97873798  2.2408932 ]
 [ 1.86755799 -0.97727788  0.95008842 -0.15135721]
 [-0.10321885  0.4105985   0.14404357  1.45427351]], shape=(3, 4), dtype=float64)

``````

The same idea applies to tensors:

``````

In [3]:

# Generate a random dense tensor of size 3 x 2 x 2.
a_dense = np.random.randn(3, 2, 2).astype(np.float32)
# Convert the tensor into the TT-format with TT-rank = 3.
a_tt = t3f.to_tt_tensor(a_dense, max_tt_rank=3)
# The 3 TT-cores are available in a_tt.tt_cores.
# To check that the conversion into the TT-format didn't change the tensor too much,
# let's convert it back and compare to the original.
reconstructed_tensor = t3f.full(a_tt)
print('The difference between the original tensor and the reconstructed '
      'one is %f' % np.linalg.norm(reconstructed_tensor - a_dense))

``````
``````

The difference between the original tensor and the reconstructed one is 0.000002

``````

## Arithmetic operations

T3F is a library of operations that can be applied to tensors in the TT-format by working directly with the compact representation, i.e. without the need to materialize the tensors themselves. Here are some basic examples.

``````

In [4]:

# Create a random tensor of shape (3, 2, 2) directly in the TT-format
# (in contrast to generating a dense tensor and then converting it to TT).
b_tt = t3f.random_tensor((3, 2, 2), tt_rank=2)
# Compute the Frobenius norm of the tensor.
norm = t3f.frobenius_norm(b_tt)
print('Frobenius norm of the tensor is %f' % norm)
# Compute the TT-representations of the sum and of the elementwise product of two TT-tensors.
sum_tt = a_tt + b_tt
prod_tt = a_tt * b_tt
twice_a_tt = 2 * a_tt
# Most operations on TT-tensors increase the TT-rank. After applying a sequence of
# operations the TT-rank can grow too large, and we may want to reduce it.
# For that there is a rounding operation, which finds a tensor of
# smaller rank that is as close to the original one as possible.
rounded_prod_tt = t3f.round(prod_tt, max_tt_rank=3)
a_max_tt_rank = np.max(a_tt.get_tt_ranks())
b_max_tt_rank = np.max(b_tt.get_tt_ranks())
exact_prod_max_tt_rank = np.max(prod_tt.get_tt_ranks())
rounded_prod_max_tt_rank = np.max(rounded_prod_tt.get_tt_ranks())
difference = t3f.frobenius_norm(prod_tt - rounded_prod_tt)
print('The TT-ranks of a and b are %d and %d. The TT-rank '
      'of their elementwise product is %d. The TT-rank of '
      'their product after rounding is %d. The difference '
      'between the exact and the rounded elementwise '
      'product is %f.' % (a_max_tt_rank, b_max_tt_rank,
                          exact_prod_max_tt_rank,
                          rounded_prod_max_tt_rank,
                          difference))

``````
``````

Frobenius norm of the tensor is 2.943432
The TT-ranks of a and b are 3 and 2. The TT-rank of their elementwise product is 6. The TT-rank of their product after rounding is 3. The difference between the exact and the rounded elementwise product is 0.003162.

``````
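
For plain matrices, the analogue of the rounding operation above is truncating the SVD: keeping the k largest singular values gives the best rank-k approximation in the Frobenius norm (the Eckart-Young theorem). A minimal NumPy sketch, independent of t3f:

```python
import numpy as np

np.random.seed(0)
a = np.random.randn(6, 6)
u, s, vt = np.linalg.svd(a, full_matrices=False)

# Keep only the k = 3 largest singular values: the best rank-3
# approximation of a in the Frobenius norm.
k = 3
a_rounded = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

# The approximation error equals the norm of the discarded singular values.
err = np.linalg.norm(a - a_rounded)
print(np.allclose(err, np.linalg.norm(s[k:])))  # True
```

TT-rounding does the analogous truncation on each TT-core of the factorized tensor.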

## Working with TT-matrices

Recall that for 2-dimensional tensors the TT-format coincides with the matrix low-rank format. However, sometimes matrices have full matrix rank but still some tensor structure (for example, a Kronecker product of matrices). For this case there is a special object called the matrix TT-format. You can think of it as a sum of Kronecker products (although it's a bit more complicated than that).
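
To see why a Kronecker product fits this format even though it has full matrix rank, note that regrouping the indices of kron(A, B) so that A's row and column indices come first turns it into a rank-1 matrix -- which is exactly a TT-matrix of TT-rank 1. A plain NumPy illustration (independent of t3f):

```python
import numpy as np

np.random.seed(0)
A = np.random.randn(2, 2)
B = np.random.randn(3, 3)
K = np.kron(A, B)  # 6 x 6, full matrix rank for generic A and B.

# Regroup the indices as (i_A, j_A) x (i_B, j_B): K becomes the outer
# product of vec(A) and vec(B), i.e. a rank-1 matrix.
K_regrouped = K.reshape(2, 3, 2, 3).transpose(0, 2, 1, 3).reshape(4, 9)
print(np.linalg.matrix_rank(K_regrouped))  # 1
```

The matrix TT-format exploits exactly this kind of hidden low-rank structure.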

Let's say that you have a matrix of size 8 x 27. You can convert it into the matrix TT-format of tensor shape (2, 2, 2) x (3, 3, 3) (in which case the matrix will be represented with 3 TT-cores) or, for example, into the matrix TT-format of tensor shape (4, 2) x (3, 9) (in which case the matrix will be represented with 2 TT-cores).
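
The "tensor shape" here is just a way of splitting the row and column axes of the matrix into several modes; in plain NumPy the two options above correspond to reshapes like the following (illustration only, the TT-cores then factorize over these modes):

```python
import numpy as np

a = np.random.rand(8, 27)
# Tensor shape (2, 2, 2) x (3, 3, 3): split the row axis of size 8 into
# 3 modes of size 2 and the column axis of size 27 into 3 modes of size 3.
a_6d = a.reshape(2, 2, 2, 3, 3, 3)
# Tensor shape (4, 2) x (3, 9): split each axis into 2 modes.
a_4d = a.reshape(4, 2, 3, 9)
print(a_6d.shape)  # (2, 2, 2, 3, 3, 3)
print(a_4d.shape)  # (4, 2, 3, 9)
```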

``````

In [5]:

a_dense = np.random.rand(8, 27).astype(np.float32)
a_matrix_tt = t3f.to_tt_matrix(a_dense, shape=((2, 2, 2), (3, 3, 3)), max_tt_rank=4)
# Now you can work with 'a_matrix_tt' like with any other TT-object, e.g.
print('Frobenius norm of the matrix is %f' % t3f.frobenius_norm(a_matrix_tt))
twice_a_matrix_tt = 2.0 * a_matrix_tt  # multiplication by a number.
prod_tt = a_matrix_tt * a_matrix_tt  # Elementwise product of two TT-matrices.

``````
``````

Frobenius norm of the matrix is 7.805310

``````

Additionally, you can compute matrix multiplication between TT-matrices:

``````

In [6]:

vector_tt = t3f.random_matrix(((3, 3, 3), (1, 1, 1)), tt_rank=3)
matvec_tt = t3f.matmul(a_matrix_tt, vector_tt)
# Check that the result coincides with np.matmul.
matvec_expected = np.matmul(t3f.full(a_matrix_tt), t3f.full(vector_tt))
difference = np.linalg.norm(matvec_expected - t3f.full(matvec_tt))
print('Difference between multiplying matrix by vector in '
      'the TT-format and then converting the result into '
      'dense vector and multiplying dense matrix by '
      'dense vector is %f.' % difference)

``````
``````

Difference between multiplying matrix by vector in the TT-format and then converting the result into dense vector and multiplying dense matrix by dense vector is 0.000001.

``````