Problem Definition

Let's put some heat on the MFO logo! In other words, we want to solve the stationary heat equation

\begin{align*} - \nabla \cdot (a_{\mu}(x)\nabla u_\mu(x)) &= 0 && x \in [0, 1]^2 \\ a_{\mu}(x) \nabla u_\mu(x) \cdot n &= 1 && x \in [0, 1] \times \{0\} \\ u_\mu(x) &= 0 && x \in \partial([0, 1]^2) \setminus ([0, 1] \times \{0\}) \end{align*}

for various parameters $\mu = (\mu_{bnd}, \mu_{in}) \in \mathcal{P} := [1, 100]^2$. The diffusivity $a_\mu$ is given by

\begin{equation} a_{\mu}(x) = a_{out}(x) + \mu_{bnd}a_{bnd}(x) + \mu_{in}a_{in}(x) \end{equation}

where $a_{out}(x) \in [0, 100]$, $a_{bnd}(x), a_{in}(x) \in [0, 1]$ are defined by the following grayscale images, with black indicating the lowest and white the highest possible value:


In [1]:
from IPython.display import HTML
HTML("""<table><tr>
<td><img src="files/mfo_outer.png"><td><img src="files/mfo_boundary.png"><td><img src="files/mfo_inner.png">
</tr></table>""")


Out[1]:

pyMOR's discretization framework

pyMOR comes with a built-in discretization framework to get you started quickly. In the following steps we will create a parameterized high-dimensional discretization that solves the given problem. (Since pyMOR is still a young project, it currently only offers very basic finite element and finite volume discretizations. However, our architecture is quite flexible, and it should be easy to implement higher order cg or dg schemes for small to medium-sized problems.)

1. Imports

First, we import everything we need for discretizing the problem. As you can see, pyMOR is heavily modularized. Extensive API documentation can be found at http://pymor.readthedocs.org/latest/. (Alternatively, IPython has several features for viewing documentation directly from within the IPython console or notebook. Take a brief look at the IPython help for more information.)


In [ ]:
import numpy as np

from pymor.core.cache import clear_caches
from pymor.core.logger import set_log_levels
from pymor.analyticalproblems.elliptic import EllipticProblem
from pymor.domaindescriptions.basic import RectDomain
from pymor.domaindescriptions.boundarytypes import BoundaryType
from pymor.discretizers.elliptic import discretize_elliptic_cg
from pymor.playground.functions.bitmap import BitmapFunction
from pymor.functions.basic import ConstantFunction
from pymor.parameters.functionals import ProjectionParameterFunctional
from pymor.parameters.spaces import CubicParameterSpace

clear_caches()  # Clears all of pyMOR's active cache regions (see pymor.core.cache).
                # This ensures that high-dimensional solutions are actually calculated
                # and not fetched from the persistent on-disk cache from previous pyMOR
                # sessions. (We only do this so that you can see how long solving the
                # high-dimensional problem takes.)

set_log_levels({'pymor': 'INFO'})  # Let pyMOR be very verbose about what it is doing.

2. Analytical Problem

Next we create an analytical problem which defines the PDE we want to solve:


In [ ]:
p = EllipticProblem(domain=RectDomain(bottom=BoundaryType('neumann')),
                    rhs=ConstantFunction(0., dim_domain=2),
                    diffusion_functions=[BitmapFunction('mfo_outer.png', range=[0., 100.]),
                                         BitmapFunction('mfo_boundary.png', range=[0., 1.]),
                                         BitmapFunction('mfo_inner.png', range=[0., 1.])],
                    diffusion_functionals=[1.,
                                           ProjectionParameterFunctional('boundary', tuple()),
                                           ProjectionParameterFunctional('inner', tuple())],
                    neumann_data=ConstantFunction(-1., dim_domain=2),
                    parameter_space=CubicParameterSpace({'boundary': tuple(), 'inner': tuple()},
                                                        minimum=1., maximum=100.),
                    name='MFO_problem')

A function (see pymor.functions.interfaces.FunctionInterface) in pyMOR is a vectorized mapping from coordinates to arbitrarily shaped NumPy arrays. Parameter functionals (see pymor.parameters.interfaces.ParameterFunctionalInterface) map parameters to scalars.
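For example (a small sketch, assuming the vectorized call convention of the pyMOR version used here), a ConstantFunction maps an array of coordinate points to an array of values:

    import numpy as np
    from pymor.functions.basic import ConstantFunction

    f = ConstantFunction(42., dim_domain=2)
    X = np.array([[0.1, 0.2], [0.5, 0.5], [0.9, 0.8]])  # three points in [0, 1]^2
    f(X)  # vectorized evaluation -> array([ 42.,  42.,  42.])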

A parameter in pyMOR is a dictionary with strings as keys and NumPy arrays of a certain shape as values (e.g. mu = {'speed': 3., 'diffusivity': np.array([[4., 1.], [1., 4.]])}). Every object in pyMOR which depends on parameters has a so-called parameter_type which defines which parameters (i.e. which keys with which shapes) the object requires. (The type of the mu defined above would be {'speed': tuple(), 'diffusivity': (2, 2)}.)

A parameter space is simply a set of parameters (of a common parameter type) from which samples can be drawn.
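For instance (a brief sketch; the exact output format may vary between pyMOR versions), we can draw random samples from the parameter space attached to our problem:

    # Draw three random parameters from the parameter space defined above.
    # Each sample is a parameter dict with the keys 'boundary' and 'inner'.
    for mu in p.parameter_space.sample_randomly(3):
        print(mu)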

3. Discretization

After defining the problem, we can now discretize it using one of pyMOR's discretizers (see pymor.discretizers). The following call will use linear finite elements on a structured triangular mesh.


In [ ]:
d, data = discretize_elliptic_cg(p, diameter=np.sqrt(2)/100)

This call returns the Discretization d (see pymor.discretizations.interfaces.DiscretizationInterface) along with a dictionary containing additional data resulting from the discretization process. (Currently only the grid and a BoundaryInfo object are returned. The BoundaryInfo identifies the boundary types associated with the boundary entities of the grid.)
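For example (assuming the dictionary keys 'grid' and 'boundary_info' returned by this pyMOR version), the additional data can be inspected as follows:

    grid = data['grid']           # the structured triangular grid
    bi = data['boundary_info']    # maps boundary entities to boundary types
    print(grid)                   # prints basic information about the grid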

We can solve the discretization and visualize the result like so:


In [ ]:
U = d.solve(mu=[1., 100.])
d.visualize(U)

The call to solve returns a VectorArray (see pymor.la.interfaces) of length 1 (for stationary problems) containing the discrete solution vector.
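We can verify this (a small sketch using the VectorArray attributes of this pyMOR version):

    len(U)  # -> 1, one solution vector for the stationary problem
    U.dim   # dimension of the high-dimensional solution space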

As you can see, pyMOR tries to be smart when parsing parameters. If all parameter components are scalars, you can simply provide a list of the values, assuming alphabetical ordering of the keys.
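In other words (given the parameter type {'boundary': tuple(), 'inner': tuple()}), the following two calls should be equivalent:

    U1 = d.solve(mu=[1., 100.])                       # list form, keys in alphabetical order
    U2 = d.solve(mu={'boundary': 1., 'inner': 100.})  # explicit dict form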

You can also visualize multiple solutions by passing a tuple of VectorArrays:


In [ ]:
d.visualize((U, U * 2))

4. Exercises - Part 1

Before we go on with the model order reduction, let's play a bit with the discretization we are given:

  1. Try solving the discretization for different parameters and visualize the results.

  2. Compare two solutions by visualizing both, along with their difference. (You can simply subtract compatible VectorArrays from each other to obtain a new array containing the difference.)

  3. Create a VectorArray containing the solutions for fixed $\mu_{bnd}=1$ and $\mu_{in}$ taking 20 equidistant values from 1 to 100. Visualize the array to obtain an animation. (A possible sketch follows this list.)
    • Use VectorArray.append to append vectors from one VectorArray to another.
    • To obtain a loop over the needed parameters use
      for mu_in in np.linspace(1., 100., 20):
          ...
    • You can create an empty VectorArray of the correct type using d.solution_space.empty().
  4. Try to apply the discrete operator (use d.operator.apply) for different parameters and visualize the result.
  5. Do the same for the affine components of the operator (d.operator.operators[i]).
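A possible sketch for exercise 3 (one way of many; mu=[1., mu_in] relies on the alphabetical key ordering, so $\mu_{bnd}=1$ stays fixed):

    Us = d.solution_space.empty()           # empty VectorArray of the correct type
    for mu_in in np.linspace(1., 100., 20):
        Us.append(d.solve(mu=[1., mu_in]))  # append the new solution snapshot
    d.visualize(Us)                         # visualizing the array yields an animation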

Enter your code in the following cell. (Add as many additional cells as you like.)


In [ ]:


Model Order Reduction with pyMOR

We are now leaving the realm of pyMOR's discretization framework. All code that follows uses completely generic algorithms which work with any Discretizations, Operators and VectorArrays that implement pyMOR's interfaces. These discretizations could live in dynamically linked C/C++ extension modules, could be read from data files on disk or could reside in a compute server controlled via a network protocol. No direct access to any high-dimensional data is required.

Once the specifications are finalized, pyMOR will also be able to automatically access discretizations which implement the OpenInterfaces specifications (http://www.openinterfaces.org).

5. More Imports

We now import some of pyMOR's model order reduction algorithms:


In [ ]:
from functools import partial

from pymor.algorithms.greedy import greedy
from pymor.algorithms.basisextension import gram_schmidt_basis_extension
from pymor.parameters.functionals import ExpressionParameterFunctional
from pymor.reductors.stationary import reduce_stationary_coercive

6. The Reductor

In pyMOR, discretizations are reduced using reductors (see pymor.reductors), which are given the high-dimensional discretization along with some data needed for the reduction (e.g. the reduced basis). In return, we obtain a reduced discretization and a reconstructor which transforms reduced solution VectorArrays into high-dimensional VectorArrays associated with the original discretization (e.g. by forming the linear combination with the reduced basis stored in the reconstructor).


In [ ]:
reductor = partial(reduce_stationary_coercive,
                   error_product=d.h1_product,
                   coercivity_estimator=ExpressionParameterFunctional("(1 + 1/pi)*min((100, boundary, inner))",
                                                                      d.parameter_type))

The reductor reduce_stationary_coercive reduces the problem via a generic Galerkin RB-projection (by calling pymor.reductors.basic.reduce_generic_rb) and then assembles a residual based error estimator for the reduction error (using pymor.reductors.residual.reduce_residual). It needs to be given a ParameterFunctional for estimating the coercivity constant of the problem for the given parameter.

The partial call above produces a new function reductor which behaves like reduce_stationary_coercive, but with the arguments error_product and coercivity_estimator fixed to the values given above.
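As a reminder of how functools.partial works in general (a plain Python example, independent of pyMOR):

    from functools import partial

    def power(base, exponent):
        return base ** exponent

    square = partial(power, exponent=2)  # fix the 'exponent' argument
    square(5)                            # -> 25, same as power(5, exponent=2)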

Discretizations in pyMOR are usually simply containers for Operators along with an algorithm which uses these operators to obtain a solution of the problem. As the RB-projection does not change the structure of the problem, the reductor will return a discretization of the same type (pymor.discretizations.basic.StationaryDiscretization), only the operators are replaced by their RB-projections.

7. Basis Generation

Next, we fire up our greedy basis generation algorithm (see pymor.algorithms.greedy). We need to pass it the high-dimensional discretization, our reductor and a basis extension algorithm which is used to augment the reduced basis with the new solution snapshot (in our case, the snapshot is simply added to the basis after orthonormalizing it with the Gram-Schmidt algorithm). Moreover, a training set of parameters must be supplied, which we sample uniformly from the discretization's parameter space.


In [ ]:
basis_extension = partial(gram_schmidt_basis_extension,
                          product=d.h1_product)

greedy_data = greedy(d,
                     reductor,
                     d.parameter_space.sample_uniformly(20),
                     extension_algorithm=basis_extension,
                     max_extensions=10)

rd, rc = greedy_data['reduced_discretization'], greedy_data['reconstructor']

Let's solve the reduced discretization for some mu:


In [ ]:
u = rd.solve(mu=[45., 20.])
u

As expected, the solution vector has 10 entries: its coordinates w.r.t. the computed reduced basis. We can compare the reduced solution with the high-dimensional solution like so:


In [ ]:
U_rb = rc.reconstruct(u)
U = d.solve(mu=[45., 20.])
d.visualize((U, U_rb, U-U_rb), legend=('detailed', 'reduced', 'error'),
            separate_colorbars=True)

8. Exercises - Part 2

  1. Take a look at the reduced basis (which can be accessed as rc.RB) by visualizing it.
  2. Check if the reduced basis is really orthonormal w.r.t. the H1-product by inspecting the Gram matrix d.h1_product.apply2(rc.RB, rc.RB, pairwise=False), which should be the identity matrix.
  3. Compute the maximum reduction error on 20 randomly sampled parameters. Visualize the detailed and reduced solutions as well as their difference for the parameter maximizing the reduction error. (A possible sketch follows this list.)
    • To loop over the parameters use
      for mu in d.parameter_space.sample_randomly(20):
          ...
      To store the parameters in a list, use:
      mus = list(d.parameter_space.sample_randomly(20))
      Note: For reproducibility, sample_randomly will always produce the same parameters if not given a different random seed. If this is unwanted, create a new random state using pymor.tools.random.new_random_state and pass the same random state to subsequent calls of sample_randomly.
    • You can compute the H1- or L2-norm of a high-dimensional or reduced solution vector using d.h1_norm, d.l2_norm or rd.h1_norm, rd.l2_norm.
  4. Create a plot "maximum reduction error vs. basis size".
    • Use pymor.reductors.basic.reduce_to_subbasis to quickly obtain reduced discretizations for smaller basis sizes.
    • For creating the plot, we can use matplotlib, whose pyplot submodule offers plotting commands similar to MATLAB's plotting facilities. To embed the plots into this notebook use
      %matplotlib inline
      import matplotlib.pyplot as plt
      Then you can use plt.plot, plt.semilogy, etc., for plotting.
  5. Compute the speedup obtained by the model reduction using time.time from the Python standard library. (Remember to call clear_caches() for a fair comparison.) You will need to average the timings over a large set of parameters to obtain reliable numbers.
  6. Try to create another reduced basis using POD (see pymor.la.pod), passing it solution snapshots for uniformly or randomly sampled parameters. Reduce the discretization d with the reductor defined above.
    • Compare the durations of the offline phases for the greedy and POD approaches.
    • Compare the maximum reduction errors for same basis sizes and training sets in the H1 and L2 norms.
    • Use a finer mesh for the high-dimensional discretization and repeat the experiment.
  7. Play with the analytical problem:
    • Change boundary conditions / source term / parameter space.
    • Experiment!
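A possible sketch for exercise 3 (method names as used earlier in this notebook; d.h1_norm returns a NumPy array of norms, so we take its first entry):

    mus = list(d.parameter_space.sample_randomly(20))
    errs = []
    for mu in mus:
        U = d.solve(mu)                       # detailed solution
        U_rb = rc.reconstruct(rd.solve(mu))   # reconstructed reduced solution
        errs.append(d.h1_norm(U - U_rb)[0])   # H1-norm of the reduction error
    mu_max = mus[int(np.argmax(errs))]        # parameter with maximum error
    U = d.solve(mu_max)
    U_rb = rc.reconstruct(rd.solve(mu_max))
    d.visualize((U, U_rb, U - U_rb), legend=('detailed', 'reduced', 'error'),
                separate_colorbars=True)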

In [ ]: