This example demonstrates the use of the homogenization model from pyMKS on a set of fiber-like structures, simulated to emulate fiber-reinforced polymer samples. For a summary of homogenization theory and its use with effective stiffness properties, please see the Effective Stiffness example. This example first generates a series of random microstructures with varying fiber lengths and volume fractions; the ability to vary the volume fraction is a new feature of this example. The generated structures are then used to calibrate and test the model against simulated effective stress values. Finally, we show that the simulated responses compare favorably with those predicted by the model.
In [1]:
import numpy as np
%matplotlib inline
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
Now we define the parameters used to create the microstructures. n_samples determines how many microstructures of each volume fraction we want to create, and size determines the number of pixels in each microstructure. The material properties used in the finite element simulations are defined by elastic_modulus, poissons_ratio and macro_strain. n_phases and grain_size determine the physical characteristics of the microstructure; we use a high aspect ratio when creating our microstructures to simulate fiber-like structures. The volume_fraction variable varies the fraction of each phase, and the volume fractions must sum to 1. The percent_variance variable introduces variation in the volume fraction up to the specified percentage.
In [29]:
sample_size = 100
n_samples = 4 * [sample_size]
size = (101, 101)
elastic_modulus = (1.3, 75)
poissons_ratio = (0.42, .22)
macro_strain = 0.001
n_phases = 2
grain_size = [(40, 2), (10, 2), (2, 40), (2, 10)]
v_frac = [(0.7, 0.3), (0.6, 0.4), (0.3, 0.7), (0.4, 0.6)]
per_ch = 0.1
Now we create the microstructures and generate their responses using the make_elastic_stress_random function from pyMKS. Four datasets are created, one for each of the four volume fractions we are simulating, and are then combined into one variable. The volume fractions are listed in the variable v_frac. Variation around each specified volume fraction can be obtained by varying per_ch; the variation is drawn from a uniform distribution around the specified volume fraction.
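The exact sampling scheme inside make_elastic_stress_random is internal to pyMKS, but the idea can be sketched in plain NumPy. The helper sample_volume_fractions below is hypothetical (not part of pyMKS) and only illustrates drawing phase fractions uniformly around a target:

```python
import numpy as np

def sample_volume_fractions(target, percent_variance, n_samples, seed=0):
    """Sketch of perturbing a two-phase volume fraction.

    `target` is the nominal fraction of phase 1; each sample is drawn
    uniformly within +/- `percent_variance` of it, and phase 0 takes up
    the remainder so the two fractions still sum to 1.
    """
    rng = np.random.RandomState(seed)
    phase1 = rng.uniform(target - percent_variance,
                         target + percent_variance, n_samples)
    return np.stack([1.0 - phase1, phase1], axis=1)

# e.g. 100 samples around a (0.7, 0.3) split with per_ch = 0.1
fractions = sample_volume_fractions(0.3, 0.1, n_samples=100)
```

Each row of fractions is a (phase 0, phase 1) pair, with the phase 1 fraction varying between 0.2 and 0.4 in this example.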
In [30]:
from pymks.datasets import make_elastic_stress_random
dataset, stresses = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size,
elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio,
macro_strain=macro_strain, volume_fraction=v_frac,
percent_variance=per_ch)
Now we print out a few microstructures to look at how the fiber length, orientation and volume fraction vary.
In [40]:
from pymks.tools import draw_microstructures
examples = dataset[::sample_size]
draw_microstructures(examples)
Next we are going to initiate the model. The MKSHomogenizationModel takes in microstructures and computes two-point statistics on them to obtain a statistical representation of the microstructures. An explanation of the use of two-point statistics can be found in the Checkerboard Microstructure example. The model then uses PCA and regression models to create a linkage between the calculated properties and structures.
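For a periodic binary microstructure, the autocorrelation of a single phase (the (0, 0) correlation) can be sketched directly with FFTs. This is a minimal illustration of the statistic, not pyMKS's implementation:

```python
import numpy as np

def periodic_autocorrelation(m):
    """Periodic two-point autocorrelation of a binary microstructure.

    m[i, j] is 1 where phase 1 is present and 0 otherwise.  The value
    at zero shift equals the volume fraction of phase 1.
    """
    F = np.fft.fftn(m)
    return np.fft.ifftn(F * np.conj(F)).real / m.size

# A synthetic 101 x 101 structure with roughly 30% phase 1.
m = (np.random.RandomState(0).rand(101, 101) < 0.3).astype(float)
stats = periodic_autocorrelation(m)
```

The zero-shift value stats[0, 0] recovers the volume fraction of phase 1, and no other shift can exceed it, which is the basic sanity check for an autocorrelation of an indicator function.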
Here we simply initiate the model.
In [33]:
from pymks import MKSHomogenizationModel
from pymks import PrimitiveBasis
p_basis = PrimitiveBasis(n_states=2, domain=[0, 1])
model = MKSHomogenizationModel(basis=p_basis, correlations=[(0, 0)])
Now we split our data into testing and training segments so we can check whether our model is effective. In a previous example we used sklearn to optimize n_components and degree, but for this example we simply use 6 components and a third-order regression. Then we fit the model with our training data.
In [34]:
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older scikit-learn versions
flat_shape = (dataset.shape[0],) + (np.prod(dataset.shape[1:]),)
data_train, data_test, stress_train, stress_test = train_test_split(
dataset.reshape(flat_shape), stresses, test_size=0.2, random_state=3)
model.n_components = 6
model.degree = 3
shapes = (data_test.shape[0],) + dataset.shape[1:]
shapes2 = (data_train.shape[0],) + dataset.shape[1:]
data_train = data_train.reshape(shapes2)
data_test = data_test.reshape(shapes)
model.fit(data_train, stress_train, periodic_axes=[0, 1])
In [41]:
from pymks.tools import draw_components
stress_predict = model.predict(data_test, periodic_axes=[0, 1])
draw_components([model.fit_data[:, :3],
model.reduced_predict_data[:, :3]],
['Training Data', 'Testing Data'])
It looks like there is pretty good agreement between the testing and the training data. We can also see that the four different fiber sizes are separated in PC space.
In [42]:
from pymks.tools import draw_goodness_of_fit
fit_data = np.array([stress_train, model.predict(data_train, periodic_axes=[0, 1])])
pred_data = np.array([stress_test, stress_predict])
draw_goodness_of_fit(fit_data, pred_data, ['Training Data', 'Testing Data'])
Yay! There is a good correlation between the FE results and those predicted by our linkage.
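The agreement shown in the parity plot can also be quantified with a coefficient of determination. A minimal sketch using scikit-learn's r2_score on hypothetical observed/predicted stress arrays (not the data above):

```python
import numpy as np
from sklearn.metrics import r2_score

# Hypothetical observed and predicted effective stresses.
observed = np.array([1.00, 1.10, 1.25, 1.40, 1.60])
predicted = np.array([1.02, 1.08, 1.27, 1.38, 1.61])

# R^2 close to 1 indicates the linkage reproduces the FE results well.
r2 = r2_score(observed, predicted)
```

In practice you would pass stress_test and stress_predict to r2_score the same way.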