Example 3

Use the CSDMS Dakota interface in Python to perform a centered parameter study on HydroTrend and evaluate the output.

Use the pylab magic to set up inline plotting and load NumPy and Matplotlib into the notebook namespace:


In [ ]:
%pylab inline

And include other necessary imports:


In [ ]:
import os
import shutil

Set up and run the experiment

Start by importing the Dakota class:


In [ ]:
from dakota.core import Dakota

Next, create a new Dakota instance to perform a centered parameter study:


In [ ]:
d = Dakota(method='centered_parameter_study')

Note that the method string follows the syntax of Dakota keywords; e.g., centered_parameter_study.

The Dakota instance comes with a few predefined attributes, including the method attribute, which is used to set up the experiment.


In [ ]:
d.__dict__

Pull out the method attribute to save some typing in the following steps:


In [ ]:
m = d.method
m.__dict__

Configure the Dakota instance to run an experiment on HydroTrend in the examples/3-Python directory of this project:


In [ ]:
m.component = 'hydrotrend'
m.interface = 'fork'
m.run_directory = '../examples/3-Python/'

The fork interface is used when Dakota calls an executable on the file system.

In this experiment, let's explore the effects of mean river velocity $u$ [$m\,s^{-1}$] and the constant annual base flow $q_0$ [$m^3\,s^{-1}$] on the median daily bedload at the river mouth $Q_b$ [$kg\,s^{-1}$] for one year of simulation time. All HydroTrend parameters that are not included in the parameter study are held constant.

First, configure the inputs:


In [ ]:
m.variable_descriptors = ['river_mean_velocity', 'base_flow']
m.initial_point = [1.0, 1.0]
m.step_vector = [0.2, 0.25]
m.steps_per_variable = [3, 2] # in each direction

Note (again) that the attribute names match the Dakota keyword names for a centered parameter study. The steps_per_variable attribute is tricky: it's the number of steps taken in each direction from the initial point; e.g., river_mean_velocity will be evaluated at a total of 7 locations.
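
As a quick check of this bookkeeping, count the evaluation points implied by these settings (a small sketch; the arithmetic follows the definition of a Dakota centered parameter study):


In [ ]:
steps = m.steps_per_variable                      # [3, 2]
points_per_axis = [2*n + 1 for n in steps]        # 7 for river_mean_velocity, 5 for base_flow
total_evaluations = 1 + sum(2*n for n in steps)   # the center point, plus steps in each direction
print 'points per axis =', points_per_axis
print 'total evaluations =', total_evaluations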

From these attributes, calculate the parameter values at which HydroTrend will be evaluated. Use the helper function calc_vector.


In [ ]:
def calc_vector(start, step_size, n_steps_per_direction):
    """Calculate a vector from a center, step size and number of steps."""
    v = []
    for i in range(len(start)):
        v_start = start[i] - step_size[i]*n_steps_per_direction[i]
        v_stop = start[i] + step_size[i]*n_steps_per_direction[i]
        v.append(numpy.linspace(v_start, v_stop, 2*n_steps_per_direction[i]+1))
    return v

In [ ]:
u, q_0 = calc_vector(m.initial_point, m.step_vector, m.steps_per_variable)
print 'u =', u
print 'q_0 =', q_0

Make a quick plot to visualize the evaluation nodes in parameter space:


In [ ]:
x = [mean(u)]*len(q_0)    # hold u at its center value while q_0 varies
y = [mean(q_0)]*len(u)    # hold q_0 at its center value while u varies
plot(x, q_0, 'ro')
plot(u, y, 'ro')
xlim((0, 2))
ylim((0, 2))
xlabel('$u \,(m\,s^{-1})$')
ylabel('$q_0 \,(m^3\,s^{-1})$')
title('Centered parameter study')

Next, set up the responses from HydroTrend used by Dakota. Each of these must be a list or a tuple. (This is a little clumsy.)


In [ ]:
m.response_descriptors = 'Qb_median', # must be list or tuple
m.response_statistics = 'median',
m.response_files = 'HYDROASCII.QB',

HydroTrend requires a hypsometry file. The default Waipaoa hypsometry file is included in the run directory. Link it to the input_files attribute of the centered parameter study object, which also must be a tuple or a list:


In [ ]:
m.input_files = os.path.join(m.run_directory, 'HYDRO0.HYPS'),  # trailing comma makes this a tuple

HydroTrend requires an input file that provides all of the parameter values needed for a run. Dakota requires a modified version of this file, called a template file, in which the values of the parameters Dakota operates on are replaced by their corresponding descriptors. When Dakota runs, it substitutes actual values for the descriptors; all parameters not included in the parameter study are held constant.
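
The substitution idea can be sketched in a couple of lines of Python (illustrative only; the descriptor syntax shown here and the actual HydroTrend template are handled by Dakota and the dakota package):


In [ ]:
# A hypothetical template fragment: descriptors stand in for values.
template_line = '{river_mean_velocity} {base_flow}'
values = {'river_mean_velocity': 1.2, 'base_flow': 0.75}
print template_line.format(**values)   # roughly what a substituted line would look like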

Make a template file for this parameter study using:

  • the HydroTrend component template file
  • a base HydroTrend input file, with default parameter values
  • the parameters to replace

The first two files are included in the run directory for this study:


In [ ]:
base_tmpl_file = os.path.join(m.run_directory, 'hydrotrend.in.tmpl')
base_input_file = os.path.join(m.run_directory, 'HYDRO.IN.defaults')

Create the template file and move it to the run directory:


In [ ]:
from dakota.plugins.hydrotrend import HydroTrend
# Build the template file: study parameters get descriptors, all others keep their default values.
tmpl_file = HydroTrend.write_tmpl_file(base_tmpl_file, base_input_file, m.variable_descriptors)
shutil.move(tmpl_file, m.run_directory)
m.template_file = os.path.join(m.run_directory, tmpl_file)

In the final setup step, set the Dakota analysis driver. This is easy, because it's always the same:


In [ ]:
m.analysis_driver = 'dakota_run_plugin'

The dakota_run_plugin script automates the actions needed for the analysis driver.
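
For context, an analysis driver is called by Dakota once per evaluation: it reads the parameters Dakota supplies, runs the model, and writes the requested response to a results file. A minimal sketch of that contract follows (illustrative only, not the dakota_run_plugin implementation; the HydroTrend command and output header length are assumptions):


In [ ]:
import subprocess
import numpy

def run_one_evaluation(params_file, results_file):
    # 1. Dakota writes the variable values for this evaluation to params_file;
    #    a real driver substitutes them into the HydroTrend input file.
    # 2. Run the model (command name is an assumption).
    subprocess.call(['hydrotrend'])
    # 3. Extract the response statistic and report it back to Dakota.
    q_b = numpy.loadtxt('HYDROASCII.QB', skiprows=2)   # header length is an assumption
    with open(results_file, 'w') as fp:
        fp.write('%f Qb_median\n' % numpy.median(q_b))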

This might be a good time to review the settings for the experiment:


In [ ]:
d.method.__dict__

All of the parameters for the experiment have been configured. For the remaining steps leading up to running the experiment, let's switch to the run directory:


In [ ]:
os.chdir(m.run_directory)

To help Dakota and HydroTrend communicate, a configuration file containing all of the experiment parameters is used. Create it with:


In [ ]:
d.write_configuration_file('config.yaml')
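
The file can be read back to check its contents (an optional step; this assumes PyYAML is installed, and the exact keys depend on the version of the dakota package):


In [ ]:
import yaml
with open('config.yaml') as fp:
    config = yaml.load(fp)
sorted(config.keys())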

Finally, use the information added to the Dakota object to write the Dakota input file:


In [ ]:
d.write_input_file()

Run Dakota!


In [ ]:
d.run()

Evaluate the results

Get a directory listing:


In [ ]:
%ls

View the tabular data file:


In [ ]:
%cat dakota.dat

Read the tabular data into this notebook:


In [ ]:
# Read the evaluation index, the two study variables, and the response from the tabular file.
data = numpy.loadtxt(m.data_file, skiprows=1, unpack=True, usecols=[0,2,3,4])

Make a plot of the experimental results:


In [ ]:
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.scatter(data[1], data[2], data[3], depthshade=False)
ax.set_xlabel('$u \,(m\,s^{-1})$')
ax.set_ylabel('$q_0 \,(m^3\,s^{-1})$')
ax.set_zlabel('$Q_b \,(kg\,s^{-1})$')

Modify the experiment

  • Use the configuration file created in the initial experiment to create a new Dakota instance.
  • Modify its settings.
  • Save the configuration.
  • Write a new Dakota input file.
  • Run the modified experiment.

The configuration file created above can be used to make a new Dakota instance:


In [ ]:
e = Dakota.from_file_like('config.yaml')

The settings for this new instance can be modified to create a new Dakota experiment.
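
For example (a sketch reusing the attribute names from above), the step size could be halved and the number of steps doubled before saving the configuration, rewriting the Dakota input file, and rerunning:


In [ ]:
e.method.step_vector = [0.1, 0.125]      # finer steps
e.method.steps_per_variable = [6, 4]     # more evaluations per variable
e.write_configuration_file('config.yaml')
e.write_input_file()
e.run()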