xtensor: The Lazy Tensor Algebra Expression Library

Who we are

QuantStack - a small open-source software consultancy in Paris


In [ ]:
#include <iostream>

std::cout << "Hello derse19!" << std::endl;

N-D arrays are everywhere

  • ... except for C++
  • Basic building block of data science
  • Images, Videos, Music, Point Clouds – all are expressed as N-dimensional data


In [ ]:
#include <xtensor/xio.hpp>
#include <xtensor/xarray.hpp>

xt::xarray<double> a = {{1,2,3}, {4,5,6}};

std::cout << a << std::endl;

STL interface


In [ ]:
for (auto& el : a)
{
    std::cout << el << std::endl;
}

In [ ]:
a(0, 1) = 10;

std::cout << a << std::endl;

In [ ]:
#include <algorithm>

std::sort(a.begin(), a.end());

std::cout << a << std::endl;

In [ ]:
std::cout << xt::transpose(a) << std::endl;

In [ ]:
a.reshape({1, 1, 6, 1});

std::cout << a << std::endl;

In [ ]:
a.reshape({2, 3});

Functions


In [ ]:
auto xfunc = sin(a) * 2;

std::cout << xfunc << std::endl;

Expression Templates


In [ ]:
template <class T>
void print_type(T) { std::cout << __PRETTY_FUNCTION__ << std::endl; }

In [ ]:
print_type(xfunc);

In [ ]:
auto xfunc_el = xfunc(0, 1);
std::cout << xfunc_el << std::endl;

In [ ]:
xt::xarray<double> xfunc_result = xfunc;

std::cout << xfunc_result << std::endl;
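
The laziness shown above can be imitated in a few lines of plain C++. This is only a sketch of the expression-template idea — `sin_expr` and `lazy_sin` are made-up names, not part of xtensor:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch: an expression object that applies std::sin on element access.
// Building the expression computes nothing; values appear on demand.
template <class E>
struct sin_expr
{
    const E& operand;
    double operator[](std::size_t i) const { return std::sin(operand[i]); }
};

template <class E>
sin_expr<E> lazy_sin(const E& e) { return {e}; }

std::vector<double> v = {0.0, 1.0, 2.0};
auto expr = lazy_sin(v);   // builds the expression, no sines computed yet
double el = expr[1];       // std::sin(1.0) is evaluated here, on access
```

Assigning such an expression to a concrete container is what forces evaluation of every element — exactly what happens in the `xt::xarray<double> xfunc_result = xfunc;` cell above.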

Broadcasting

  • Automagically extend operands to match shape
  • Broadcasting rules copied from NumPy


In [ ]:
std::cout << a << std::endl;

In [ ]:
xt::xarray<double> brcast = {-10, +1, -10};
std::cout << brcast << std::endl;

In [ ]:
std::cout << a * brcast << std::endl;
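
The rule behind this can be sketched in plain C++: shapes are aligned at the trailing edge, and a dimension broadcasts when the two extents are equal or one of them is 1. The helper below is hypothetical, not xtensor's actual implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <stdexcept>
#include <vector>

// NumPy-style broadcasting: align shapes on the right; a dimension
// is compatible if the two extents are equal or one of them is 1.
std::vector<std::size_t> broadcast_shape(std::vector<std::size_t> a,
                                         std::vector<std::size_t> b)
{
    if (a.size() < b.size()) std::swap(a, b);
    std::vector<std::size_t> out = a;
    std::size_t offset = a.size() - b.size();
    for (std::size_t i = 0; i < b.size(); ++i)
    {
        std::size_t x = a[offset + i], y = b[i];
        if (x != y && x != 1 && y != 1)
            throw std::runtime_error("shapes not broadcastable");
        out[offset + i] = std::max(x, y);
    }
    return out;
}

// {2, 3} and {3} broadcast to {2, 3}, as in a * brcast above
auto bshape = broadcast_shape({2, 3}, {3});
```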

Memory Layout

  • Freely choose row-major, column-major, or custom strides

In [ ]:
xt::xarray<double, xt::layout_type::column_major> a_cm = {{1,2,3}, {4,5,6}};

std::cout << a_cm << std::endl;
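
What changes between the two layouts is only the strides: how far to jump in the flat buffer per index along each axis. A plain-C++ sketch with hypothetical helpers (not xtensor internals):

```cpp
#include <cstddef>
#include <vector>

// Row-major: the last axis is contiguous in memory.
std::vector<std::size_t> row_major_strides(const std::vector<std::size_t>& shape)
{
    std::vector<std::size_t> s(shape.size(), 1);
    for (std::size_t i = shape.size(); i > 1; --i)
        s[i - 2] = s[i - 1] * shape[i - 1];
    return s;
}

// Column-major: the first axis is contiguous in memory.
std::vector<std::size_t> column_major_strides(const std::vector<std::size_t>& shape)
{
    std::vector<std::size_t> s(shape.size(), 1);
    for (std::size_t i = 1; i < shape.size(); ++i)
        s[i] = s[i - 1] * shape[i - 1];
    return s;
}

// shape {2, 3}: row-major strides {3, 1}, column-major strides {1, 2}
```

For a fixed shape such as `xt::xshape<2, 3>` this computation can run entirely at compile time, which is why `xtensor_fixed` needs no storage for its strides.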

Container optimizations


In [ ]:
// uses a std::array for shape & strides
#include <xtensor/xtensor.hpp>

xt::xtensor<double, 2> b = {{1, 2, 3}, {4, 5, 6}};
std::cout << b << std::endl;

In [ ]:
// uses a fixed size container for data (std::array)
// strides computed constexpr at compile time (no storage)
#include <xtensor/xfixed.hpp>

xt::xtensor_fixed<double, xt::xshape<2, 3>> c = {{1,2,3}, {4,5,6}};
std::cout << c << std::endl;

Views

  • powerful views to reference sub-arrays
  • compute shapes statically if possible


In [ ]:
#include <xtensor/xview.hpp>
#include <xtensor/xio.hpp>
using namespace xt::placeholders;

std::cout << a << std::endl;

In [ ]:
std::cout << xt::view(a, 1, xt::all()) << std::endl;

In [ ]:
// numpy equivalent: a[:, ::2]
std::cout << xt::view(a, xt::all(), xt::range(_, _, 2)) << std::endl;

In [ ]:
xt::view(a, xt::all(), xt::range(_, _, 2)) *= 100;

std::cout << a << std::endl;

In [ ]:
std::cout << a << std::endl;

In [ ]:
std::cout << xt::view(a, xt::keep(0, 0, 0), xt::all()) << std::endl;

Adapting a 1D container


In [ ]:
#include <vector>
#include <xtensor/xadapt.hpp>

std::vector<double> vdata = {1, 2, 3, 4, 5, 6, 7, 8, 9};
auto adapted_vdata = xt::adapt(vdata, {3, 3});

In [ ]:
std::cout << adapted_vdata << std::endl;

In [ ]:
adapted_vdata(1, 2)
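
Behind the adaptor there is only index arithmetic on the flat buffer: for a row-major 3x3 shape, element (i, j) lives at offset i * 3 + j. A minimal sketch of that mapping (hypothetical helper, not `xt::adapt`'s implementation):

```cpp
#include <cstddef>
#include <vector>

std::vector<double> data = {1, 2, 3, 4, 5, 6, 7, 8, 9};

// Row-major 2-D indexing into a flat 1-D buffer
double at(const std::vector<double>& buf, std::size_t ncols,
          std::size_t i, std::size_t j)
{
    return buf[i * ncols + j];
}

double el = at(data, 3, 1, 2);  // element (1, 2) of the 3x3 view: 6
```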

In [ ]:
#include <xtensor/xmath.hpp>

std::cout << sum(adapted_vdata) << std::endl;

In [ ]:
std::cout << sum(adapted_vdata, {1}) << std::endl;

NumPy to xtensor Cheatsheet

Many functions that were not discussed:

  • More reducers: mean, prod ...
  • Accumulators: cumsum, cumprod ...
  • Random numbers, generators
  • Sorting, filtering
  • Optional (masked) values
  • NPY, CSV file format support

https://xtensor.readthedocs.io/en/latest/numpy.html

xtensor

  • Modular sub-packages:
    • xsimd: SIMD vectorization support (batch classes, high-speed implementations of trigonometric, exponential, ... functions)
    • Intel TBB: parallelization
  • On top of xtensor:
    • xtensor-blas: bindings to BLAS & LAPACK for matrix multiplication, inverse, svd, ...
    • xtensor-io: image, sound, NPZ, and HDF5 file formats (video support coming soon)
    • xtensor-fftw, xtensor-interpolate and more

xtensor as a building block

  • xtensor-python / julia / R
  • Seamlessly use arrays from each language
  • Better performance for dynamic languages

Rayshading with xtensor

Benchmarks

  • Standalone C++: 0.016s
  • Python + NumPy: 14.22s
  • Python + xtensor: 0.022s
  • Julia: 0.014s
  • Julia + xtensor: 0.014s
  • R: 9.905s
  • R + xtensor: 0.013s

Thirdparty packages using xtensor

  • More and more packages are adopting xtensor

  • C++: xtensor-fftw, xtensor-interpolate, tiny-dnn

  • Python: z5 (data format), cppcolormap
  • R: rray (broadcasting n-dim matrices for R!)

Binding a Python / NumPy container

#include <numeric>
#include <xtensor/xmath.hpp>
#include <xtensor-python/pyarray.hpp>
#include <pybind11/pybind11.h>

double sum_of_sines(xt::pyarray<double>& m)
{
    auto sines = xt::sin(m);  // sines does not actually hold values.
    return std::accumulate(sines.begin(), sines.end(), 0.0);
}

PYBIND11_MODULE(xtensor_python_test, m)
{
    xt::import_numpy();
    m.doc() = "Test module for xtensor python bindings";

    m.def("sum_of_sines", sum_of_sines, "Sum the sines of the input values");
}

Thanks

Check out https://github.com/QuantStack/xtensor

Follow on Twitter:

  • @JohanMabille
  • @wuoulf
  • @QuantStack

xtensor vs Blitz++

  • No runtime-dimension container in Blitz++ (the equivalent of xt::xarray)
  • Blitz++ unmaintained/dead
  • Legacy codebase, already broken with modern compilers

xtensor vs Eigen3

  • N-dimensional from the start (Eigen's Tensor module lives in /unsupported)
  • modular design vs monolithic Eigen
  • API subjectively "nicer", modern C++ features
  • extensibility (Python/Julia/R)