The easiest way to get flux samples is to use the `sample` function from the `cobra.sampling` submodule. `sample` takes at least two arguments: a cobra model and the number of samples you want to generate.

In [1]:

```
from cobra.test import create_test_model
from cobra.sampling import sample

model = create_test_model("textbook")
s = sample(model, 100)
s.head()

```
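`sample` returns the samples as a pandas DataFrame with one row per sample and one column per reaction, so the usual pandas tooling applies. A minimal sketch:

```
# Each row is one flux sample, each column one reaction;
# e.g. the mean sampled flux per reaction:
s.mean().head()
```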

By default, `sample` uses the `optgp` method, based on the parallel sampler of Megchelenbrink et al. (2014), as it is suited for larger models and can run in parallel. The sampler uses a single process by default; this can be changed with the `processes` argument.

In [2]:

```
print("One process:")
%time s = sample(model, 1000)
print("Two processes:")
%time s = sample(model, 1000, processes=2)
```

Alternatively, you can use Artificial Centering Hit-and-Run (ACHR) for sampling by setting the method to `achr`. `achr` does not support parallel execution, but it has good convergence and is almost Markovian.

In [3]:

```
s = sample(model, 100, method="achr")
```

In general, setting up a sampler is expensive since initial search directions are generated by solving many linear programming problems, so we recommend generating as many samples as possible in one go. For finer control, the sampling process can be managed on a lower level by using the sampler classes directly.

In [4]:

```
from cobra.sampling import OptGPSampler, ACHRSampler
```

Both sampler classes have standardized interfaces and take additional arguments, for instance the `thinning` factor. "Thinning" means only recording samples every n iterations. A higher thinning factor means less correlated samples but also longer computation times. By default, the samplers use a thinning factor of 100, which creates roughly uncorrelated samples. If you want fewer samples but better mixing, feel free to increase this parameter. If you want to study convergence for your own model, you might want to set it to 1 to obtain all iterates.

In [5]:

```
achr = ACHRSampler(model, thinning=10)
```
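For example, following the advice above, a convergence study would record every iterate with a thinning factor of 1 (a sketch; variable names are illustrative):

```
# thinning=1 keeps all iterates: good for convergence checks,
# but the resulting samples are strongly correlated.
achr_all = ACHRSampler(model, thinning=1)
chain = achr_all.sample(500)
```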

`OptGPSampler` has an additional `processes` argument specifying how many processes are used to create parallel sampling chains. This should be on the order of your CPU cores for maximum efficiency. As noted above, class initialization can take up to a few minutes due to the generation of initial search directions. Sampling, in contrast, is quick.

In [6]:

```
optgp = OptGPSampler(model, processes=4)
```
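Since the best process count depends on your machine, one common pattern is to query the number of cores at runtime (a sketch; `optgp_all` is an illustrative name):

```
import multiprocessing

# Match the number of parallel chains to the available CPU cores.
optgp_all = OptGPSampler(model, processes=multiprocessing.cpu_count())
```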

Both samplers provide a `sample` method that behaves like the `sample` function described above, except that it accepts a single argument: the number of samples. For `OptGPSampler`, the number of samples should be a multiple of the number of processes; otherwise, it will be increased to the nearest multiple automatically.

In [7]:

```
s1 = achr.sample(100)
s2 = optgp.sample(100)
```
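To illustrate the rounding rule just described (a sketch, assuming `optgp` was created with `processes=4` as above): requesting 101 samples is not a multiple of 4, so the count should be increased automatically.

```
# 101 is rounded up to 104, the nearest multiple of the 4 processes.
s3 = optgp.sample(101)
len(s3)  # expect 104
```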

You can call `sample` repeatedly, and both samplers are optimized to generate large amounts of samples without falling into "numerical traps". All sampler objects have a `validate` function that checks whether a set of points is feasible and reports feasibility violations in the form of a short code, which is a combination of any of the following letters:

- "v" - valid point
- "l" - lower bound violation
- "u" - upper bound violation
- "e" - equality violation (meaning the point is not a steady state)

For instance, for a random flux distribution (which should not be feasible):

In [8]:

```
import numpy as np

bad = np.random.uniform(-1000, 1000, size=len(model.reactions))
achr.validate(np.atleast_2d(bad))
```


And for our generated samples:

In [9]:

```
achr.validate(s1)
```


`validate` is pretty fast and works quickly even for large models and many samples. If you find invalid samples, you do not necessarily have to rerun the entire sampling but can exclude them from the sample DataFrame.

In [10]:

```
s1_valid = s1[achr.validate(s1) == "v"]
len(s1_valid)
```


Sampler objects are made for generating billions of samples; however, using the `sample` function might quickly fill up your RAM when working with genome-scale models. Here, the `batch` method of the sampler objects comes in handy. `batch` takes two arguments: the number of samples in each batch and the number of batches. A small example makes this clear.

Let's assume we want to quantify what proportion of our samples will grow. For that, we generate 10 batches of 100 samples each and measure what percentage of the samples in each batch show a growth rate larger than 0.1. Finally, we calculate the mean and standard deviation of those individual percentages.

In [11]:

```
counts = [np.mean(s.Biomass_Ecoli_core > 0.1) for s in optgp.batch(100, 10)]
print("Usually {:.2f}% +- {:.2f}% grow...".format(
    np.mean(counts) * 100.0, np.std(counts) * 100.0))
```

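Because `batch` yields one DataFrame at a time, you can also process or persist each batch without ever holding all samples in memory. A sketch (the file name pattern is hypothetical):

```
# Stream five batches of 1000 samples each straight to disk.
for i, batch in enumerate(optgp.batch(1000, 5)):
    batch.to_csv("samples_batch_{}.csv".format(i))
```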

Flux sampling also respects additional constraints defined in the model. For instance, we can add a constraint enforcing a growth rate of at least 0.1:

In [12]:

```
co = model.problem.Constraint(model.reactions.Biomass_Ecoli_core.flux_expression, lb=0.1)
model.add_cons_vars([co])
```

In [13]:

```
s = sample(model, 10)
print(s.Biomass_Ecoli_core)
```


The sampled biomass fluxes all lie at or above 0.1, so our new constraint was respected.
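Note that the explicit solver constraint above is mainly for illustration; for a simple bound like this, you could also set the reaction's lower bound directly (a minimal sketch to the same effect):

```
# Equivalent for this case: bound the biomass reaction itself.
model.reactions.Biomass_Ecoli_core.lower_bound = 0.1
s = sample(model, 10)
```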