Linear Map: Uniform Sampling

Copyright (C) 2014-2019 The BET Development Team

This example solves a stochastic inverse problem for a linear 3-to-2 map. We refer to the map as the quantity of interest (QoI) map, or simply the QoI. We refer to the range of the QoI map as the data space.

The 3-D input space is discretized with i.i.d. uniform random samples or a regular grid of samples. We refer to the input space as the parameter space, and use parameter to refer to a particular point (e.g., a particular random sample) in this space. A reference parameter is used to define a reference QoI datum, and a uniform probability measure is defined on a small box centered at this datum.

The measure on the data space is discretized either randomly or deterministically, and this discretized measure is then inverted by BET to determine a probability measure on the parameter space whose support contains the measurable sets of probable parameters.

We use emulation to estimate the measures of sets defined by the random discretizations. 1D and 2D marginals are calculated, smoothed, and plotted.
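
For concreteness, the QoI map imported from myModel below is a fixed linear 3-to-2 map. A minimal sketch of such a map is shown here; the matrix entries are illustrative assumptions, not necessarily the values used in myModel.py.


In [ ]:
import numpy as np

def my_model(parameter_samples):
    # Illustrative linear 3-to-2 QoI map: each row of parameter_samples is a
    # point in the 3-D parameter space; each row of the output is a point in
    # the 2-D data space. The matrix entries below are placeholder values.
    Q_map = np.array([[0.5, 0.4],
                      [0.3, 0.9],
                      [0.1, 0.5]])
    QoI_samples = np.dot(parameter_samples, Q_map)
    return QoI_samples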


In [ ]:
import numpy as np
import bet.calculateP.simpleFunP as simpleFunP
import bet.calculateP.calculateP as calculateP
import bet.sample as samp
import bet.sampling.basicSampling as bsam
from myModel import my_model

Characterize Parameter Space

Define the sampler that will be used to create the discretization object, which is the fundamental object used by BET to compute solutions to the stochastic inverse problem. The sampler and my_model together form the interface between BET and the model, allowing BET to create input/output samples of the model.


In [ ]:
sampler = bsam.sampler(my_model)

# Initialize 3-dimensional input parameter sample set object
input_samples = samp.sample_set(3)

# Set parameter domain
input_samples.set_domain(np.repeat([[0.0, 1.0]], 3, axis=0))

Suggested Changes

Try with and without random sampling.

If using random sampling, try num_samples = 1E3 and 1E4. What happens when num_samples = 1E2? Try using 'lhs' instead of 'random' in random_sample_set (a sketch of this variant follows the cell below).

If using regular sampling, try different numbers of samples per dimension.


In [ ]:
# Generate samples on the parameter space
randomSampling = False
if randomSampling:
    input_samples = sampler.random_sample_set('random', input_samples, num_samples=1E3)
else:
    input_samples = sampler.regular_sample_set(input_samples, num_samples_per_dim=[15, 15, 10])
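
As suggested above, Latin hypercube sampling can be used in place of plain random sampling. A minimal sketch of that variant, reusing the sampler and input_samples objects defined earlier, is:


In [ ]:
# Latin hypercube alternative to plain random sampling; this replaces the
# samples generated in the previous cell with 1E4 LHS samples.
input_samples = sampler.random_sample_set('lhs', input_samples, num_samples=1E4)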

Characterize Data Space

Compute the simple function approximation to the output probability distribution by propagating a different set of samples to implicitly define a Voronoi discretization of the data space. This corresponds to an implicitly defined set of contour events that discretize the input parameter space.

The probabilities of the Voronoi cells in the data space (and thus the probabilities of the corresponding contour events in the input parameter space) are determined by Monte Carlo sampling, i.e., by binning a set of i.i.d. uniform samples into these cells.

Suggested Changes

A standard Monte Carlo (MC) assumption is that every Voronoi cell has the same volume. If a regular grid of samples is used, then the standard MC assumption holds.

See what happens if the MC assumption is not used, and if different numbers of points are used to estimate the volumes of the Voronoi cells.


In [ ]:
MC_assumption = True
# Estimate volumes of Voronoi cells associated with the parameter samples
if not MC_assumption:
    input_samples.estimate_volume(n_mc_points=1E5)
else:
    input_samples.estimate_volume_mc()

# Create the discretization object using the input samples
my_discretization = sampler.compute_QoI_and_create_discretization(
    input_samples, savefile='3to2_discretization.txt.gz')

Solve Problem

Suggested Changes

Try different reference parameters.

Try different ways of discretizing the probability measure on D, which is defined as a uniform probability measure on a rectangle (since D is 2-dimensional) centered at Q_ref, whose size is determined by scaling the circumscribing box of D.


In [ ]:
# Define the reference parameter
param_ref = np.array([0.5, 0.5, 0.5])
#param_ref = np.array([0.75, 0.75, 0.5])
#param_ref = np.array([0.75, 0.75, 0.75])
#param_ref = np.array([0.5, 0.5, 0.75])

# Compute the reference QoI
Q_ref = my_model(param_ref)

In [ ]:
randomDataDiscretization = False
if randomDataDiscretization is False:
    simpleFunP.regular_partition_uniform_distribution_rectangle_scaled(
        data_set=my_discretization, Q_ref=Q_ref, rect_scale=0.25,
        cells_per_dimension=3)
else:
    simpleFunP.uniform_partition_uniform_distribution_rectangle_scaled(
        data_set=my_discretization, Q_ref=Q_ref, rect_scale=0.25,
        M=50, num_d_emulate=1E5)

# calculate probabilities
calculateP.prob(my_discretization)
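
As a quick sanity check, the probabilities computed by calculateP.prob are stored on the input sample set and should sum to approximately one. The getter used below is assumed to be available on the sample set object.


In [ ]:
# Sanity check: the inverted probabilities live on the input sample set and
# should sum to (approximately) one.
print(input_samples.get_probabilities().sum())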

In [ ]:
%store my_discretization
%store param_ref
%store Q_ref
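
The introduction mentions that 1D and 2D marginals are calculated, smoothed, and plotted; the stored variables above are typically consumed by a follow-up plotting step. A sketch of that step, assuming the bet.postProcess.plotP interface with calculate/smooth/plot marginal functions, might look like the following.


In [ ]:
import bet.postProcess.plotP as plotP

# Calculate, smooth, and plot the 2D marginal probabilities of the inverted
# measure on the parameter space (one plot per pair of parameter directions).
(bins, marginals2D) = plotP.calculate_2D_marginal_probs(input_samples, nbins=[10, 10, 10])
marginals2D = plotP.smooth_marginals_2D(marginals2D, bins, sigma=0.2)
plotP.plot_2D_marginal_probs(marginals2D, bins, input_samples, filename="linearMap",
                             lam_ref=param_ref, file_extension=".png")

# Repeat for the 1D marginals.
(bins, marginals1D) = plotP.calculate_1D_marginal_probs(input_samples, nbins=[10, 10, 10])
marginals1D = plotP.smooth_marginals_1D(marginals1D, bins, sigma=0.2)
plotP.plot_1D_marginal_probs(marginals1D, bins, input_samples, filename="linearMap",
                             lam_ref=param_ref, file_extension=".png")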