Point Collocation

This tutorial uses the same example as in the example introduction. In other words:


In [1]:
import numpy
from matplotlib import pyplot
import chaospy
from chaospy.example import (
    coordinates, exponential_model, distribution)

The point collocation method is a broad term, as it covers multiple approaches. But in a nutshell:

  • Generate $Q_1, ..., Q_N = (I_1, R_1), ..., (I_N, R_N)$, using (pseudo-)random samples or otherwise.
  • Evaluate the model predictor for each sample to obtain $U_1, ..., U_N$.
  • Select an expansion of polynomials $P_1, ..., P_M$, typically orthogonal (with respect to the probability distribution).
  • Define the linear regression problem: $U_n = \sum_m c_m(t) P_m(I_n, R_n)$ and solve for $c_1, ..., c_M$.
  • Analyze the model approximation $u(t; I, R) = \sum_m c_m(t) P_m(I, R)$ instead of the actual model solver.

This approach does not require the samples to follow any particular distribution, nor does it require the polynomial expansion to have any particular properties, like e.g. orthogonality. In practice, however, when focusing on statistical properties (like mean and variance), it makes sense to have the number of samples in a region be roughly proportional to the probability density function. This can be achieved in a few ways, but the most common is to use either random samples/low-discrepancy sequences or abscissas from a quadrature scheme. In the same way, even though the polynomials do not have to be orthogonal, using orthogonal polynomials has in practice been shown to be more numerically stable.
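As a minimal sketch, the whole recipe can be condensed into a handful of lines using the same chaospy functions that are demonstrated step by step in the rest of this tutorial (the variable names below are illustrative only):

# Condensed sketch of the recipe above, reusing the imports from the
# first cell; each statement corresponds to one bullet point.
samples_sketch = distribution.sample(100, rule="halton")
evals_sketch = numpy.array([exponential_model(sample)
                            for sample in samples_sketch.T])
expansion_sketch = chaospy.orth_ttr(4, distribution)
approx_sketch = chaospy.fit_regression(
    expansion_sketch, samples_sketch, evals_sketch)
mean_sketch = chaospy.E(approx_sketch, distribution)
variance_sketch = chaospy.Var(approx_sketch, distribution)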

Below, we will compare the Halton sequence to optimal Gaussian quadrature.

Generating samples

In chaospy, creating low-discrepancy sequences can be done using the distribution.sample method. Creating quadrature points can be done using chaospy.generate_quadrature.


In [2]:
# Sampling schemes we want to include
rules = ["halton", "gaussian"]

# Generate samples for each sampling scheme:
samples = {
    "halton": distribution.sample((10+1)**2, rule="halton"),
    "gaussian": chaospy.generate_quadrature(
        10, distribution, rule="gaussian")[0]
}

samples["halton"].shape, samples["gaussian"].shape


Out[2]:
((2, 121), (2, 121))

The two schemes represent two different approaches. One is meant to mimic random samples, while the other is designed for quadrature.


In [3]:
# increase figure width to better fit two plots
pyplot.rc("figure", figsize=[9, 4])

for idx, rule in enumerate(rules):
    pyplot.subplot(1, 2, idx+1)
    
    # Show samples as 2-dimensional scatter plot
    pyplot.scatter(*samples[rule], marker="x", color="k")
    
    # Make the scatter plots pretty
    pyplot.xlabel("init (normal)")
    pyplot.ylabel("rate (uniform)") if not idx else None
    pyplot.axis([0.9, 2.1, 0.1, 0.2])
    pyplot.title(rule)

pyplot.show()


Evaluating model predictor

Creating model predictions is fairly straightforward. Just note that the (random) samples generated have shape (dimensions, number_of_samples). As such, it is important to transpose the sample array when iterating over samples.


In [4]:
# Evaluate model predictor for every sample
evaluations = {
    rule: numpy.array([exponential_model(sample)
                       for sample in samples[rule].T])
    for rule in rules
}

evaluations["halton"].shape, evaluations["gaussian"].shape


Out[4]:
((121, 1000), (121, 1000))

These model evaluations can be visualized; however, it is hard to observe the difference between the two schemes from these plots.


In [5]:
for idx, rule in enumerate(rules):
    pyplot.subplot(1, 2, idx+1)
    
    # Plot every third of the model evaluations
    for evals in evaluations[rule][::3]:
        pyplot.plot(coordinates, evals, "k-", alpha=0.3)
    
    # Make plot pretty
    pyplot.xlabel("coordinates $t$")
    pyplot.ylabel("model evaluations $u$") if not idx else None
    pyplot.title(rule)
    pyplot.axis([0, 10, 0, 2])

pyplot.show()


Select expansion of polynomials

Creating a polynomial expansion can be done using the three terms recursion algorithm:


In [6]:
polynomial_order = 4
polynomial_expansion = chaospy.orth_ttr(
    polynomial_order, distribution)
polynomial_expansion[:6].round(5)


Out[6]:
polynomial([1.0, q1-0.15, q0-1.5, q1**2-0.3*q1+0.02167,
            q0*q1-1.5*q1-0.15*q0+0.225, q0**2-3.0*q0+2.21])

Here the polynomials are functions of two variables, $q_0$ and $q_1$, which, following the ordering of the distribution, refer to $I$ and $R$ respectively in the original problem.

The polynomials in the created expansion are mutually orthogonal in a weighted Hilbert space:

$$ \langle \Phi_n, \Phi_m \rangle = \mathbb{E}[\Phi_n\Phi_m] = \delta_{nm} \|\Phi_n\|^2 \qquad \|\Phi_n\| = \sqrt{\langle\Phi_n, \Phi_n\rangle} $$

where $\delta_{nm}$ is the Kronecker delta.

In other words:


In [7]:
outer_product = chaospy.outer(
    polynomial_expansion[:6], polynomial_expansion[:6])
normed_inner_products = (
    chaospy.E(outer_product, distribution)/
    chaospy.E(polynomial_expansion[:6]**2, distribution)
)
normed_inner_products.round(10)


Out[7]:
array([[ 1.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  1.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1., -0.,  0.],
       [ 0.,  0.,  0., -0.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  1.]])

It is worth noting that, as the problem scales up, calculating $\mathbb E[\Phi_n^2]$ symbolically is known to become numerically unstable. The norms are more accurately calculated either using some quadrature scheme, or using the coefficients from the three terms recursion. The latter can be retrieved by passing the flag retall=True to chaospy.orth_ttr.


In [8]:
_, norms = chaospy.orth_ttr(
    polynomial_order, distribution, retall=True)
norms[:3], chaospy.E(polynomial_expansion[:3]**2, distribution)


Out[8]:
(array([1.00000000e+00, 8.33333333e-04, 4.00000000e-02]),
 array([1.00000000e+00, 8.33333333e-04, 4.00000000e-02]))
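As a minimal sketch of the quadrature-based alternative, the norms can be approximated as weighted sums of the squared polynomials over Gaussian quadrature nodes, reusing the chaospy.generate_quadrature call shown earlier (the variable names here are illustrative):

# Sketch: approximate the norms E[Phi_n^2] numerically with Gaussian
# quadrature instead of symbolic expectations.
nodes, weights = chaospy.generate_quadrature(
    2*polynomial_order, distribution, rule="gaussian")
norms_quadrature = numpy.sum(
    polynomial_expansion(*nodes)**2*weights, axis=-1)
norms_quadrature[:3].round(10)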

Solve linear problem

With all samples $Q_1, ..., Q_N$, model evaluations $U_1, ..., U_N$ and polynomial expansion $\Phi_1, ..., \Phi_M$, we can put everything together to solve the equations:

$$ U_n = \sum_{m=1}^M c_m(t) \Phi_m(Q_n) \qquad n = 1, ..., N $$

with respect to the coefficients $c_1, ..., c_M$.

This can be done using the helper function chaospy.fit_regression:


In [9]:
model_approximations = {
    rule: chaospy.fit_regression(
        polynomial_expansion, samples[rule], evaluations[rule])
    for rule in rules
}
(model_approximations["halton"][:4].round(3),
 model_approximations["gaussian"][:4].round(3))


Out[9]:
(polynomial([q0, -0.01*q0*q1+q0, -0.02*q0*q1+q0, -0.03*q0*q1+q0]),
 polynomial([q0, -0.01*q0*q1+q0, -0.02*q0*q1+q0, -0.03*q0*q1+q0]))

This approximation can also be done manually, using the solution for least squares:

$$ [\hat c_m]_m = \left([\Phi_m(Q_n)]_{mn}^T [\Phi_m(Q_n)]_{mn}\right)^{-1} [\Phi_m(Q_n)]_{mn}^T [U_n]_n $$

Or using code:


In [10]:
# Evaluate the expansion at the samples; shape: (samples, polynomials)
phi = polynomial_expansion(*samples["halton"]).T
# Form and solve the normal equations for the coefficients
ptp = phi.T @ phi
pty = phi.T @ evaluations["halton"]
c_hat = numpy.linalg.inv(ptp) @ pty
c_hat.shape


Out[10]:
(15, 1000)
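Forming and inverting the normal-equation matrix explicitly, as above, can become ill-conditioned for larger expansions. As a minimal sketch of a more robust variant, the same least-squares problem can be solved with numpy.linalg.lstsq on the design matrix directly:

# Sketch: solve the same least-squares problem without explicitly
# inverting the normal-equation matrix.
c_hat_lstsq, *_ = numpy.linalg.lstsq(
    phi, evaluations["halton"], rcond=None)
c_hat_lstsq.shape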

The coefficients can then be combined with the original polynomial expansion:


In [11]:
alternative_approximation = chaospy.sum(
    c_hat.T*polynomial_expansion, axis=1).T
alternative_approximation[:4].round(3)


Out[11]:
polynomial([q0, -0.01*q0*q1+q0, -0.02*q0*q1+q0, -0.03*q0*q1+q0])
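As a quick sanity check (a sketch, using a handful of illustrative test points), the manually assembled approximation should agree with the one returned by chaospy.fit_regression:

# Sketch: evaluate both approximations at a few test points; this
# should return True up to numerical round-off.
test_points = distribution.sample(5, rule="halton")
numpy.allclose(alternative_approximation(*test_points),
               model_approximations["halton"](*test_points))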

Assess Statistics

With the model approximations in place, the mean and variance as functions of time can be estimated directly from the polynomial approximations using chaospy.E and chaospy.Var:

In [12]:
for idx, rule in enumerate(rules):
    pyplot.subplot(1, 2, idx+1)
    
    # Estimate mean and variance
    expected = chaospy.E(
        model_approximations[rule], distribution)
    variance = chaospy.Var(
        model_approximations[rule], distribution)

    # Create band one standard deviation away from mean 
    pyplot.fill_between(coordinates, expected-variance**0.5,
                        expected+variance**0.5, alpha=0.3,
                        color="k", label="standard deviation")
    pyplot.plot(coordinates, expected, "k-",
                label="Estimated expected value")

    # Make plot pretty
    pyplot.xlabel("coordinates $t$")
    pyplot.axis([0, 10, 0, 2])
    pyplot.title(rule)
    if idx:
        pyplot.legend()
    else:
        pyplot.ylabel("Model evaluations $u$")
    
pyplot.show()


Error Analysis

It is hard to assess how well these models perform from the final estimates alone; they look about the same. To compare the results, we therefore perform an error analysis, using the reference analytical solution and the error functions defined in the example introduction.


In [13]:
from chaospy.example import error_mean, error_variance

To illustrate the difference in a common format, we plot the error as a function of the number of samples used, on logarithmic axes.


In [14]:
# Estimate mean and variance for various number of samples
polynomial_orders = list(range(2, 6))
for idx, rule in enumerate(rules):
    pyplot.subplot(1, 2, idx+1)
    
    sample_sizes = []
    errors_mean = []
    errors_variance = []
    for order in polynomial_orders:
        
        # Perform analysis for a specific order
        samples = (
            distribution.sample((order+1)**2, rule="halton")
            if rule == "halton"
            else chaospy.generate_quadrature(
                order, distribution, rule="gaussian")[0]
        )
        evaluations = [exponential_model(sample)
                       for sample in samples.T]
        polynomial_expansion = chaospy.orth_ttr(
            order, distribution)
        model_approximation = chaospy.fit_regression(
            polynomial_expansion, samples, evaluations)
        
        # Store results
        sample_sizes.append((order+1)**2)
        errors_mean.append(error_mean(
            chaospy.E(model_approximation, distribution)))
        errors_variance.append(error_variance(
            chaospy.Var(model_approximation, distribution)))
    
    
    # Error plot for mean
    pyplot.loglog(sample_sizes, errors_mean,
                  "ko-", label="mean")
    
    # Error plot for variance
    pyplot.loglog(sample_sizes, errors_variance,
                  "ko--", label="variance")
    
    # Make plot pretty
    pyplot.axis([
        min(sample_sizes), max(sample_sizes), 1e-16, 1e-3])
    pyplot.title(rule)
    pyplot.xlabel("Number of samples")
    pyplot.legend() if idx else pyplot.ylabel("Error")
    
pyplot.show()


It is worth noting that Halton is not doing poorly so much as the Gaussian approach is doing much better. Though this can sometimes be the case, the advantage most often drops drastically with the number of dimensions.
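One way to see why is that the Gaussian approach, as used here, relies on a full tensor-grid quadrature, whose node count grows exponentially with the number of dimensions, while the size of a Halton sample can be chosen freely. A rough illustration (assuming a full, non-sparse tensor grid of fixed order):

# Rough illustration: a full tensor-grid quadrature of order 4 needs
# (order+1)**dim nodes, which quickly becomes impractical.
order = 4
for dim in range(1, 7):
    print(f"{dim} dimension(s): {(order + 1)**dim} quadrature nodes")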