In [1]:
# Hidden TimeStamp
import time, datetime
st = datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')
print('Last Run: {}'.format(st))
In [2]:
# Hidden Working Directory
# Run this cell only once
from IPython.display import clear_output
%cd ../
clear_output()
In [3]:
#PYTEST_VALIDATE_IGNORE_OUTPUT
# Hidden Versioning
import numpy, matplotlib, pandas, six, openpyxl, xlrd, version_information, lamana
%reload_ext version_information
%version_information numpy, matplotlib, pandas, six, openpyxl, xlrd, version_information, lamana
Out[3]:
In [4]:
# Hidden Namespace Reset
%reset -sf
%whos
This notebook is pinned. It should not be run at any time beyond the pre-dev phase. It is designed to test for regressions between sessions through a dev-release cycle.
The following demonstration includes basic and intermediate uses of the LamAna Project library. It is intended to exhaustively reference all API features; therefore, some advanced demonstrations will favor technical detail.
In [5]:
#------------------------------------------------------------------------------
import pandas as pd
import lamana as la
#import LamAna as la
%matplotlib inline
#%matplotlib nbagg
# PARAMETERS ------------------------------------------------------------------
# Build dicts of geometric and material parameters
load_params = {'R' : 12e-3, # specimen radius
'a' : 7.5e-3, # support ring radius
'r' : 2e-4, # radial distance from center loading
'P_a' : 1, # applied load
'p' : 5, # points/layer
}
# Quick Form: a dict of lists
mat_props = {'HA' : [5.2e10, 0.25],
'PSu' : [2.7e9, 0.33],
}
# Standard Form: a dict of dicts
# mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
# 'Poissons': {'HA': 0.25, 'PSu': 0.33}}
# What geometries to test?
# Make tuples of desired geometries to analyze: outer - {inner...-....}_i - middle
# Current Style
g1 = ('0-0-2000') # Monolith
g2 = ('1000-0-0') # Bilayer
g3 = ('600-0-800') # Trilayer
g4 = ('500-500-0') # 4-ply
g5 = ('400-200-800') # Short-hand; <= 5-ply
g6 = ('400-200-400S') # Symmetric
g7 = ('400-[200]-800') # General convention; 5-ply
g8 = ('400-[100,100]-800') # General convention; 7-plys
g9 = ('400-[100,100]-400S') # General and Symmetric convention; 7-plys
'''Add to test set'''
g13 = ('400-[150,50]-800') # Dissimilar inner_is
g14 = ('400-[25,125,50]-800')
geos_most = [g1, g2, g3, g4, g5]
geos_special = [g6, g7, g8, g9]
geos_full = [g1, g2, g3, g4, g5, g6, g7, g8, g9]
geos_dissimilar = [g13, g14]
# Future Style
#geos1 = ((400-400-400),(400-200-800),(400-350-500)) # same total thickness
#geos2 = ((400-400-400), (400-500-1600), (400-200-800)) # same outer thickness
In [6]:
#import pandas as pd
pd.set_option('display.max_columns', 10)
pd.set_option('precision', 4)
In [7]:
case1 = la.distributions.Case(load_params, mat_props) # instantiate a User Input Case Object through distributions
case1.apply(['400-200-800'])
case1.plot()
That's it! The rest of this demonstration showcases API functionality of the LamAna project.
Passed-in arguments are accessible and can be displayed as pandas Series and DataFrames.
In [8]:
# Original
case1.load_params
Out[8]:
In [9]:
# Series View
case1.parameters
Out[9]:
In [10]:
# Original
case1.mat_props
Out[10]:
In [11]:
# DataFrame View
case1.properties
Out[11]:
In [12]:
# Equivalent Standard Form
case1.properties.to_dict()
Out[12]:
Reset the material order. Changes are reflected in the properties view and the stacking order.
In [13]:
case1.materials = ['PSu', 'HA']
case1.properties
Out[13]:
Serial resets
In [14]:
case1.materials = ['PSu', 'HA', 'HA']
case1.properties
Out[14]:
In [15]:
case1.materials # get reordered list of materials
Out[15]:
In [16]:
case1._materials
Out[16]:
In [17]:
case1.apply(geos_full)
In [18]:
case1.snapshots[-1]
Out[18]:
In [19]:
'''Need to bypass pandas abc ordering of indices.'''
Out[19]:
Reset the parameters
In [20]:
mat_props2 = {'HA' : [5.3e10, 0.25],
'PSu' : [2.8e9, 0.33],
}
In [21]:
case1 = la.distributions.Case(load_params, mat_props2)
case1.properties
Out[21]:
Construct a laminate using geometric and material parameters and geometries.
In [22]:
case2 = la.distributions.Case(load_params, mat_props)
case2.apply(geos_full) # default model Wilson_LT
Access the user input geometries
In [23]:
case2.Geometries # using an attribute, __repr__
Out[23]:
In [24]:
print(case2.Geometries) # uses __str__
In [25]:
case2.Geometries[0] # indexing
Out[25]:
We can compare Geometry objects with built-in Python operators. This process directly compares the GeometryTuples in the Geometry class.
In [26]:
bilayer = case2.Geometries[1] # (1000.0-[0.0]-0.0)
trilayer = case2.Geometries[2] # (600.0-[0.0]-800.0)
#bilayer == trilayer
bilayer != trilayer
Out[26]:
Get all thicknesses for selected layers.
In [27]:
case2.middle
Out[27]:
In [28]:
case2.inner
Out[28]:
In [29]:
case2.inner[-1]
Out[29]:
In [30]:
case2.inner[-1][0] # List indexing allowed
Out[30]:
In [31]:
[first[0] for first in case2.inner] # iterate
Out[31]:
In [32]:
case2.outer
Out[32]:
A general and very important object is the LaminateModel.
In [33]:
case2.LMs
Out[33]:
Sometimes you might want to throw in a bunch of geometry strings from different groups. If there are repeated strings in different groups (set intersections), you can tell Case to only give a unique result.
For instance, here we combine two groups of geometry strings, 5-plys and odd-plys. Clearly these two groups overlap, and there are some repeated geometries (one with a different convention). Using the unique keyword, Case only operates on a unique set of Geometry objects (independent of convention), resulting in a unique set of LaminateModels.
In [34]:
fiveplys = ['400-[200]-800', '350-400-500', '200-100-1400']
oddplys = ['400-200-800', '350-400-500', '400.0-[100.0,100.0]-800.0']
mix = fiveplys + oddplys
mix
Out[34]:
In [35]:
# Non-unique, repeated 5-plys
case_ = la.distributions.Case(load_params, mat_props)
case_.apply(mix)
case_.LMs
Out[35]:
In [36]:
# Unique
case_ = la.distributions.Case(load_params, mat_props)
case_.apply(mix, unique=True)
case_.LMs
Out[36]:
You can get a quick view of the stack using the snapshot method. This gives access to a Construct - a DataFrame-converted stack.
In [37]:
case2.snapshots[-1]
Out[37]:
We can easily view entire laminate DataFrames using the frames attribute. This gives access to LaminateModel (DataFrame) objects, which extend the stack view so that laminate theory is applied to each row.
In [38]:
'''Consider head command for frames list'''
Out[38]:
In [39]:
#case2.frames
In [40]:
##with pd.set_option('display.max_columns', None): # display all columns, within this context manager
## case2.frames[5]
In [41]:
case2.frames[5].head()
Out[41]:
In [42]:
'''Extend laminate attributes'''
Out[42]:
In [43]:
case3 = la.distributions.Case(load_params, mat_props)
case3.apply(geos_dissimilar)
#case3.frames
NOTE: for even plies, the materials are set to alternate for each layer. Thus outer layers may be different materials.
In [44]:
case4 = la.distributions.Case(load_params, mat_props)
case4.apply(['400-[100,100,100]-0'])
case4.frames[0][['layer', 'matl', 'type']]
;
Out[44]:
In [45]:
'''Add functionality to customize material type.'''
Out[45]:
The distributions.Case class has useful properties for totaling specific layers across a group of laminates as lists. As these properties return lists, the results can be sliced and iterated.
In [46]:
'''Show Geometry first then case use.'''
Out[46]:
In [47]:
case2.total
Out[47]:
In [48]:
case2.total_middle
Out[48]:
In [49]:
case2.total_middle
Out[49]:
In [50]:
case2.total_inner_i
Out[50]:
In [51]:
case2.total_outer
Out[51]:
In [52]:
case2.total_outer[4:-1] # slicing
Out[52]:
In [53]:
[inner_i[-1]/2.0 for inner_i in case2.total_inner_i] # iterate
Out[53]:
The total attributes used in Case actually derive from attributes on individual Geometry objects. On Geometry objects, they return specific thicknesses instead of lists of thicknesses.
In [54]:
G1 = case2.Geometries[-1]
G1
Out[54]:
In [55]:
G1.total # laminate thickness (um)
Out[55]:
In [56]:
G1.total_inner_i # inner_i laminae
Out[56]:
In [57]:
G1.total_inner_i[0] # inner_i lamina pair
Out[57]:
In [58]:
sum(G1.total_inner_i) # inner total
Out[58]:
In [59]:
G1.total_inner # inner total
Out[59]:
Access the LaminateModel object directly using the LMs attribute.
In [60]:
case2.LMs[5].Middle
Out[60]:
In [61]:
case2.LMs[5].Inner_i
Out[61]:
Laminates are assumed mirrored at the neutral axis, but dissimilar inner_i thicknesses are allowed.
In [62]:
case2.LMs[5].tensile
Out[62]:
Separate from the case attributes, Laminates also have useful attributes of their own, such as nplies, p, and total.
In [63]:
LM = case2.LMs[4]
LM.LMFrame.tail(7)
Out[63]:
Often the extreme stress values (those at the interfaces) are most important. This is equivalent to p=2.
In [64]:
LM.extrema
Out[64]:
In [65]:
LM.p # number of rows per group
Out[65]:
In [66]:
LM.nplies # number of plies
Out[66]:
In [67]:
LM.total # total laminate thickness (m)
Out[67]:
In [68]:
LM.Geometry
Out[68]:
In [69]:
'''Overload the min and max special methods.'''
Out[69]:
In [70]:
LM.max_stress # max interfacial failure stress
Out[70]:
NOTE: this feature gives a different result for p=1 since a single middle cannot report two interfacial values; INDET.
In [71]:
LM.min_stress
Out[71]:
In [72]:
'''Redo to return series of bool and index for has_attrs'''
Out[72]:
In [73]:
LM.has_neutaxis
Out[73]:
In [74]:
LM.has_discont
Out[74]:
In [75]:
LM.is_special
Out[75]:
In [76]:
LM.FeatureInput
Out[76]:
In [77]:
'''Need to fix FeatureInput and Geometry inside LaminateModel'''
Out[77]:
As with Geometry objects, we can also compare LaminateModel objects. This process directly compares two defining components of a LaminateModel object: the LM DataFrame (LMFrame) and the FeatureInput. If either comparison is False, the equality returns False.
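A generic sketch of this composed equality, using plain Python and pandas rather than LamAna's actual implementation (the FeatureInput and frame stand-ins below are simplified placeholders):
import pandas as pd

df = pd.DataFrame({'stress': [1.0, 2.0]})
fi = {'Model': 'Wilson_LT', 'p': 5}          # simplified stand-in for a FeatureInput dict

def models_equal(fi_a, frame_a, fi_b, frame_b):
    '''Sketch: True only if both the FeatureInput dicts and the frames match.'''
    # DataFrame.equals returns a single bool and treats identical NaNs as equal
    return fi_a == fi_b and frame_a.equals(frame_b)

models_equal(fi, df, fi, df.copy())          # True
models_equal(fi, df, {'p': 2}, df)           # False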
In [78]:
case2 = la.distributions.Case(load_params, mat_props)
case2.apply(geos_full)
In [79]:
bilayer_LM = case2.LMs[1]
trilayer_LM = case2.LMs[2]
trilayer_LM == trilayer_LM
#bilayer_LM == trilayer_LM
Out[79]:
In [80]:
bilayer_LM != trilayer_LM
Out[80]:
Use Python and pandas native comparison tracebacks to understand the errors directly by comparing the FeatureInput dicts and LaminateModel DataFrames.
In [81]:
#bilayer_LM.FeatureInput == trilayer_LM.FeatureInput # gives detailed traceback
In [82]:
'''Fix FI DataFrame with dict.'''
Out[82]:
In [83]:
bilayer_LM.FeatureInput
Out[83]:
In [84]:
#bilayer_LM.LMFrame == trilayer_LM.LMFrame # gives detailed traceback
CAVEAT: it is recommended to use at least p=2 for calculating stress. Fewer than two points for odd plies is indeterminate in middle rows, which can raise exceptions.
In [85]:
'''Find a way to remove all but interfacial points.'''
Out[85]:
We can quickly plot simple stress distributions with native pandas methods. We have two variants for displaying distributions:
- Unnormalized: plotted by the height (`d_`). Visually: thicknesses vary, material slopes are constant.
- Normalized: plotted by the relative fraction level (`k_`). Visually: thicknesses are constant, material slopes vary.
Here we plot with the nbagg matplotlib backend to generate interactive figures. NOTE: for Normalized plots, slope can vary for a given material.
In [86]:
from lamana.utils import tools as ut
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
#%matplotlib nbagg
# Quick plotting
case4 = ut.laminator(dft.geos_standard)
for case in case4.values():
for LM in case.LMs:
df = LM.LMFrame
df.plot(x='stress_f (MPa/N)', y='d(m)', title='Unnormalized Distribution')
df.plot(x='stress_f (MPa/N)', y='k', title='Normalized Distribution')
Out[86]:
While we get reasonable stress distribution plots rather simply, LamAna offers some plotting methods pertinent to laminates that assist with visualization.
Demo - An example illustration of the desired plotting of multiple geometries from distributions.
This is an image of results from legacy code, used for comparison.
We can plot the stress distribution for a case of a single geometry.
In [87]:
case3 = la.distributions.Case(load_params, mat_props)
case3.apply(['400-200-800'], model='Wilson_LT')
case3.plot()
We can also plot multiple geometries of similar total thickness.
In [88]:
five_plies = ['350-400-500', '400-200-800', '200-200-1200', '200-100-1400',
'100-100-1600', '100-200-1400', '300-400-600']
case4 = la.distributions.Case(load_params, mat_props)
case4.apply(five_plies, model='Wilson_LT')
case4.plot()
In [89]:
'''If different plies or patterns, make new caselet (subplot)'''
'''[400-200-800, '300-[400,200]-600'] # non-congruent? equi-ply'''
'''[400-200-800, '400-200-0'] # odd/even ply'''
# currently superimposes plots. Just needs to separate.
Out[89]:
Saving data is critical for future analysis. LamAna offers two formats for exporting your data and parameters. Parameters used to make calculations, such as the FeatureInput information, are saved as "dashboards" in different forms.
The lowest level at which to export data is the LaminateModel object.
In [90]:
LM = case4.LMs[0]
LM.to_xlsx(delete=True) # or `to_csv()`
Out[90]:
The latter LaminateModel data was saved to an .xlsx file in the default export folder. The filepath is returned (currently suppressed with the ; line).
The next level at which to export data is the case. This will save all of the files comprised in a case. If exported to csv format, the files are saved separately. In xlsx format, a single file is made in which each LaminateModel's data and dashboard are saved as separate worksheets.
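For instance, assuming the export is kept on disk (no delete=True) and the returned filepath is captured as noted above, the workbook can be inspected with plain pandas (a recent pandas; the sheet handling below is generic, not a LamAna API):
import pandas as pd

workbook_path = case4.to_xlsx()                          # assumed: returns the written .xlsx filepath
sheets = pd.read_excel(workbook_path, sheet_name=None)   # dict of DataFrames, one per worksheet
list(sheets.keys())                                      # LaminateModel data and dashboard sheets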
In [91]:
case4.to_xlsx(temp=True, delete=True) # or `to_csv()`
Out[91]:
So far, the barebones objects have been discussed and a lot can be accomplished with the basics. For users who have some experience with Python and Pandas, here are some intermediate techniques to reduce repetitious actions.
This section discusses the use of abstract base classes intended to reduce redundant tasks such as multiple case creation and default parameter definitions. Custom model subclassing is also discussed.
In [92]:
#------------------------------------------------------------------------------
import pandas as pd
import lamana as la
%matplotlib inline
#%matplotlib nbagg
# PARAMETERS ------------------------------------------------------------------
# Build dicts of loading parameters and material properties
load_params = {'R' : 12e-3, # specimen radius
'a' : 7.5e-3, # support ring radius
'r' : 2e-4, # radial distance from center loading
'P_a' : 1, # applied load
'p' : 5, # points/layer
}
# # Quick Form: a dict of lists
# mat_props = {'HA' : [5.2e10, 0.25],
# 'PSu' : [2.7e9, 0.33],}
# Standard Form: a dict of dicts
mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
'Poissons': {'HA': 0.25, 'PSu': 0.33}}
# What geometries to test?
# Make tuples of desired geometries to analyze: outer - {inner...-....}_i - middle
# Current Style
g1 = ('0-0-2000') # Monolith
g2 = ('1000-0-0') # Bilayer
g3 = ('600-0-800') # Trilayer
g4 = ('500-500-0') # 4-ply
g5 = ('400-200-800') # Short-hand; <= 5-ply
g6 = ('400-200-400S') # Symmetric
g7 = ('400-[200]-800') # General convention; 5-ply
g8 = ('400-[100,100]-800') # General convention; 7-plys
g9 = ('400-[100,100]-400S') # General and Symmetric convention; 7-plys
'''Add to test set'''
g13 = ('400-[150,50]-800') # Dissimilar inner_is
g14 = ('400-[25,125,50]-800')
geos_most = [g1, g2, g3, g4, g5]
geos_special = [g6, g7, g8, g9]
geos_full = [g1, g2, g3, g4, g5, g6, g7, g8, g9]
geos_dissimilar = [g13, g14]
This is a brief introduction to the underlying objects in this package. We begin with an input string that is parsed and converted into a Geometry object. This is part of the input_ module.
In [93]:
# Geometry object
la.input_.Geometry('100-200-1600')
Out[93]:
This object has a number of handy methods. This information is shipped with parameters and properties in a FeatureInput. A FeatureInput is simply a dict. It currently does not have an official class, but it is important for other objects.
In [94]:
# FeatureInput
FI = {
'Geometry': la.input_.Geometry('400.0-[200.0]-800.0'),
'Materials': ['HA', 'PSu'],
'Model': 'Wilson_LT',
'Parameters': load_params,
'Properties': mat_props,
'Globals': None,
}
The following objects are serially inherited and are part of the constructs module. They construct the DataFrame representation of a laminate. The code to decouple LaminateModel from Laminate was merged in version 0.4.13.
In [95]:
# Stack object
la.constructs.Stack(FI)
Out[95]:
In [96]:
# Laminate object
la.constructs.Laminate(FI)
Out[96]:
In [97]:
# LaminateModel object
la.constructs.LaminateModel(FI)
Out[97]:
The latter cells verify these objects are successfully decoupled. That's all for now.
We've already seen we can generate a case object and plots with three lines of code. However, sometimes it is necessary to generate different cases. These invocations can become tedious at three lines of code per case. Have no fear. A simple way to produce more cases is to instantiate a Cases object.
Below we will create a Cases object which houses multiple cases that:
In [98]:
cases1 = la.distributions.Cases(['400-200-800', '350-400-500',
'400-200-0', '1000-0-0'],
load_params=load_params,
mat_props=mat_props, model= 'Wilson_LT',
ps=[3,4,5])
cases1
Out[98]:
Cases() accepts a list of geometry strings. Given appropriate default keywords, this lone argument will return a dict-like object of cases with indices as keys. The model and ps keywords have default values.
A Cases() object has some interesting characteristics (this is not a dict):
- uses Defaults() to simplify instantiations
In [99]:
# Gettable
cases1[0] # normal dict key selection
cases1[-1] # negative indices
cases1[-2] # negative indices
Out[99]:
In [100]:
# Sliceable
cases1[0:2] # range of dict keys
cases1[0:3] # full range of dict keys
cases1[:] # full range
cases1[1:] # start:None
cases1[:2] # None:stop
cases1[:-1] # None:negative index
cases1[:-2] # None:negative index
#cases1[0:-1:-2] # start:stop:step; NotImplemented
#cases1[::-1] # reverse; NotImplemented
Out[100]:
In [101]:
# Viewable
cases1
cases1.LMs
Out[101]:
In [102]:
# Iterable
for i, case in enumerate(cases1): # __iter__ values
print(case)
#print(case.LMs) # access LaminateModels
In [103]:
# Writable
#cases1.to_csv() # write to file
In [104]:
# Selectable
cases1.select(nplies=[2,4]) # by # plies
cases1.select(ps=[3,4]) # by points/DataFrame rows
cases1.select(nplies=[2,4], ps=[3,4], how='intersection') # by set operations
Out[104]:
LaminateModels can be compared using set theory. Unique subsets of LaminateModels can be returned from a mix of repeated geometry strings. We will use the default model and ps values.
In [105]:
set(geos_most).issubset(geos_full) # confirm repeated items
Out[105]:
In [106]:
mix = geos_full + geos_most # contains repeated items
In [107]:
# Repeated Subset
cases2 = la.distributions.Cases(mix, load_params=load_params, mat_props=mat_props)
cases2.LMs
Out[107]:
In [108]:
# Unique Subset
cases2 = la.distributions.Cases(mix, load_params=load_params, mat_props=mat_props,
unique=True)
cases2.LMs
Out[108]:
We observed the benefits of using implicit, default keywords (model, ps) in simplifying the writing of Cases() instantiations. In general, the user can code explicit defaults for load_params and mat_props by subclassing BaseDefaults() from input_. While subclassing requires some extra Python knowledge, this is a relatively simple process that reduces a significant amount of redundant code, leading to a more efficient analytical setting.
The BaseDefaults class contains dicts of various geometry strings and Geometry objects. Rather than defining examples for various geometry plies, the user can call from all or a grouping of geometries.
In [109]:
from lamana.input_ import BaseDefaults
bdft = BaseDefaults()
# geometry String Attributes
bdft.geo_inputs # all dict key-values
bdft.geos_all # all geo strings
bdft.geos_standard # static
bdft.geos_sample # active; grows
# Geometry Object Attributes; mimics latter
bdft.Geo_objects # all dict key-values
bdft.Geos_all # all Geo objects
# more ...
# Custom FeatureInputs
#bdft.get_FeatureInput() # quick builds
#bdft.get_materials() # convert to std. form
Out[109]:
The latter geometric defaults come out of the box when subclassed from BaseDefaults. If custom geometries are desired, the user can override the geo_inputs dict, which automatically builds the Geo_objects dict.
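For example, a minimal sketch of extending geo_inputs might look like the following (the '5-ply' key and the added geometry string are illustrative assumptions; whether Geo_objects refreshes automatically depends on the BaseDefaults implementation):
from lamana.input_ import BaseDefaults

class CustomDefaults(BaseDefaults):
    '''Sketch: extend the stock geometry strings with a custom entry.'''
    def __init__(self):
        BaseDefaults.__init__(self)
        # Hypothetical custom geometry string appended to an existing group
        self.geo_inputs['5-ply'].append('450-[250]-600')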
Users can override three categories of default parameters: geometry strings (geo_inputs), loading parameters (load_params), and material properties (mat_props).
As mentioned, some geometric variables are provided for general laminate dimensions. The other parameters cannot be predicted and need to be defined by the user. Below is an example of a Defaults() subclass. If a custom model has been implemented (see the next section), it is convention to place Defaults() and all other custom code within that module. If a custom model is implemented and located in the models directory, Cases will automatically search the designated model modules, locate the load_params and mat_props attributes, and load them for all Cases instantiations.
In [110]:
# Example Defaults from LamAna.models.Wilson_LT
class Defaults(BaseDefaults):
'''Return parameters for building distributions cases. Useful for consistent
testing.
Dimensional defaults are inherited from utils.BaseDefaults().
Material-specific parameters are defined here by the user.
- Default geometric and materials parameters
- Default FeatureInputs
Examples
========
>>>dft = Defaults()
>>>dft.load_params
{'R' : 12e-3, 'a' : 7.5e-3, 'p' : 1, 'P_a' : 1, 'r' : 2e-4,}
>>>dft.mat_props
{'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
'Poissons': {'HA': 0.25, 'PSu': 0.33}}
>>>dft.FeatureInput
{'Geometry' : '400-[200]-800',
'Geometric' : {'R' : 12e-3, 'a' : 7.5e-3, 'p' : 1, 'P_a' : 1, 'r' : 2e-4,},
'Materials' : {'HA' : [5.2e10, 0.25], 'PSu' : [2.7e9, 0.33],},
'Custom' : None,
'Model' : Wilson_LT,
}
'''
def __init__(self):
BaseDefaults.__init__(self)
'''DEV: Add defaults first. Then adjust attributes.'''
# DEFAULTS ------------------------------------------------------------
# Build dicts of geometric and material parameters
self.load_params = {'R' : 12e-3, # specimen radius
'a' : 7.5e-3, # support ring radius
'p' : 5, # points/layer
'P_a' : 1, # applied load
'r' : 2e-4, # radial distance from center loading
}
self.mat_props = {'Modulus': {'HA': 5.2e10, 'PSu': 2.7e9},
'Poissons': {'HA': 0.25, 'PSu': 0.33}}
# ATTRIBUTES ----------------------------------------------------------
# FeatureInput
self.FeatureInput = self.get_FeatureInput(self.Geo_objects['standard'][0],
load_params=self.load_params,
mat_props=self.mat_props,
##custom_matls=None,
model='Wilson_LT',
global_vars=None)
In [111]:
'''Use Classic_LT here'''
Out[111]:
In [112]:
from lamana.distributions import Cases
# Auto load_params and mat_params
dft = Defaults()
cases3 = Cases(dft.geos_full, model='Wilson_LT')
#cases3 = la.distributions.Cases(dft.geos_full, model='Wilson_LT')
cases3
Out[112]:
In [113]:
'''Refine idiom for importing Cases '''
Out[113]:
One of the most powerful features of LamAna is the ability to define customized modifications to the Laminate Theory models.
Code for laminate theories (i.e. Classic_LT, Wilson_LT) is located in the models directory. These models can be simple functions or subclass from BaseModels in the theories module. Either approach is acceptable (see the narrative docs for more details on creating custom models).
This ability to add custom code makes this library extensible to a larger variety of models.
An example of multiple subplots is shown below. Using a former case, notice each subplot is independent, with separate geometries for each. LamAna treats each subplot as a subset or "caselet":
In [114]:
cases1.plot(extrema=False)
Each caselet can also be a separate case, plotting multiple geometries for each as accomplished with Case.
In [115]:
const_total = ['350-400-500', '400-200-800', '200-200-1200',
'200-100-1400', '100-100-1600', '100-200-1400',]
const_outer = ['400-550-100', '400-500-200', '400-450-300',
'400-400-400', '400-350-500', '400-300-600',
'400-250-700', '400-200-800', '400-0.5-1199']
const_inner = ['400-400-400', '350-400-500', '300-400-600',
'200-400-700', '200-400-800', '150-400-990',
'100-400-1000', '50-400-1100',]
const_middle = ['100-700-400', '150-650-400', '200-600-400',
'250-550-400', '300-400-500', '350-450-400',
'400-400-400', '450-350-400', '750-0.5-400']
case1_ = const_total
case2_ = const_outer
case3_ = const_inner
case4_ = const_middle
cases_ = [case1_, case2_, case3_, case4_]
In [116]:
cases3 = la.distributions.Cases(cases_, load_params=load_params,
mat_props=mat_props, model= 'Wilson_LT',
ps=[2,3])
In [117]:
cases3.plot(extrema=False)
See Demo notebooks for more examples of plotting.
In [118]:
'''Fix importing cases'''
Out[118]:
In [119]:
from lamana.distributions import Cases
The term "caselet" is defined in LPEP 003. Most importantly, the various types a caselet represents is handled by Cases
and discussed here. In 0.4.4b3+, caselets are contained in lists. LPEP entertains the idea of containing caselets in dicts.
In [120]:
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
%matplotlib inline
str_caselets = ['350-400-500', '400-200-800', '400-[200]-800']
list_caselets = [['400-400-400', '400-[400]-400'],
['200-100-1400', '100-200-1400',],
['400-400-400', '400-200-800','350-400-500',],
['350-400-500']]
case1 = la.distributions.Case(dft.load_params, dft.mat_props)
case2 = la.distributions.Case(dft.load_params, dft.mat_props)
case3 = la.distributions.Case(dft.load_params, dft.mat_props)
case1.apply(['400-200-800', '400-[200]-800'])
case2.apply(['350-400-500', '400-200-800'])
case3.apply(['350-400-500', '400-200-800', '400-400-400'])
case_caselets = [case1, case2, case3]
mixed_caselets = [['350-400-500', '400-200-800',],
[['400-400-400', '400-[400]-400'],
['200-100-1400', '100-200-1400',]],
[case1, case2,]
]
dict_caselets = {0: ['350-400-500', '400-200-800', '200-200-1200',
'200-100-1400', '100-100-1600', '100-200-1400'],
1: ['400-550-100', '400-500-200', '400-450-300',
'400-400-400', '400-350-500', '400-300-600'],
2: ['400-400-400', '350-400-500', '300-400-600',
'200-400-700', '200-400-800', '150-400-990'],
3: ['100-700-400', '150-650-400', '200-600-400',
'250-550-400', '300-400-500', '350-450-400'],
}
In [121]:
cases = Cases(str_caselets)
#cases = Cases(str_caselets, combine=True)
#cases = Cases(list_caselets)
#cases = Cases(list_caselets, combine=True)
#cases = Cases(case_caselets)
#cases = Cases(case_caselets, combine=True) # collapse to one plot
#cases = Cases(str_caselets, ps=[2,5])
#cases = Cases(list_caselets, ps=[2,3,5,7])
#cases = Cases(case_caselets, ps=[2,5])
#cases = Cases([], combine=True) # test raises
# For next versions
#cases = Cases(dict_caselets)
#cases = Cases(mixed_caselets)
#cases = Cases(mixed_caselets, combine=True)
cases
Out[121]:
In [122]:
cases.LMs
Out[122]:
In [123]:
'''BUG: Following cell raises an Exception in Python 2. Comment to pass nb reg test in pytest.'''
Out[123]:
In [124]:
cases.caselets
Out[124]:
In [125]:
'''get out tests from code'''
'''run tests'''
'''test set selections'''
Out[125]:
In [126]:
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
cases = Cases(dft.geo_inputs['5-ply'], ps=[2,3,4])
In [127]:
len(cases) # test __len__
Out[127]:
In [128]:
cases.get(1) # __getitem__
Out[128]:
In [129]:
#cases[2] = 'test' # __setitem__; not implemented
In [130]:
cases[0] # select
Out[130]:
In [131]:
cases[0:2] # slice (__getitem__)
Out[131]:
In [132]:
del cases[1] # __delitem__
In [133]:
cases # test __repr__
Out[133]:
In [134]:
print(cases) # test __str__
In [135]:
cases == cases # test __eq__
Out[135]:
In [136]:
not cases != cases # test __ne__
Out[136]:
In [137]:
for i, case in enumerate(cases): # __iter__ values
print(case)
#print(case.LMs)
In [138]:
cases.LMs # peek inside cases
Out[138]:
In [139]:
cases.frames # get a list of DataFrames directly
Out[139]:
In [140]:
cases
Out[140]:
In [141]:
#cases.to_csv() # write to file
Cases can check whether a caselet is unique by comparing the underlying geometry strings. Here we have non-unique caselets of different types. We get unique results within each caselet using the unique keyword. Notice that different caselets could have similar LaminateModels.
In [142]:
str_caselets = ['350-400-500', '400-200-800', '400-[200]-800']
str_caselets2 = [['350-400-500', '350-[400]-500'],
['400-200-800', '400-[200]-800']]
list_caselets = [['400-400-400', '400-[400]-400'],
['200-100-1400', '100-200-1400',],
['400-400-400', '400-200-800','350-400-500',],
['350-400-500']]
case1 = la.distributions.Case(dft.load_params, dft.mat_props)
case2 = la.distributions.Case(dft.load_params, dft.mat_props)
case3 = la.distributions.Case(dft.load_params, dft.mat_props)
case1.apply(['400-200-800', '400-[200]-800'])
case2.apply(['350-400-500', '400-200-800'])
case3.apply(['350-400-500', '400-200-800', '400-400-400'])
case_caselets = [case1, case2, case3]
The following cells attempt to print the LM objects. Cases objects are unordered and thus print in random orders.
It is important to note that once set operations are performed, order is no longer preserved. This is related to how Python handles hashes. This applies to Cases() in two areas:
- the unique keyword, optionally invoked during instantiation
- the how keyword within the Cases.select() method
Gotcha: Although a Cases instance is a dict, as of 0.4.4b3 its __iter__ method has been overridden to iterate the values by default (not the keys, as in Python). This choice was made since the keys are uninformative integers, while the values (currently cases) are of interest, which saves typing .items() when iterating a Cases instance.
>>> cases = Cases()
>>> for i, case in cases.items() # python
>>> ... print(case)
>>> for case in cases: # modified
>>> ... print(case)
This behavior may change in future versions.
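A generic sketch of this design choice in plain Python (not the LamAna implementation): a dict subclass whose iteration yields values instead of keys:
class ValueIterableDict(dict):
    '''Sketch: iterating yields values, mirroring the Cases gotcha above.'''
    def __iter__(self):
        return iter(self.values())

d = ValueIterableDict({0: 'case A', 1: 'case B'})
[case for case in d]    # ['case A', 'case B'] rather than the keys [0, 1]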
In [143]:
#----------------------------------------------------------+
In [144]:
# Iterating Over Cases
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
In [145]:
# Multiple cases, Multiple LMs
cases = Cases(dft.geos_full, ps=[2,5]) # two cases (p=2,5)
for i, case in enumerate(cases): # iter case values()
print('Case #: {}'.format(i))
for LM in case.LMs:
print(LM)
print("\nYou iterated several cases (ps=[2,5]) comprising many LaminateModels.")
In [146]:
# A single case, single LM
cases = Cases(['400-[200]-800']) # a single case and LM (manual)
for i, case_ in enumerate(cases): # iter i and case
for LM in case_.LMs:
print(LM)
print("\nYou processed a case and LaminateModel w/iteration. (Recommended)\n")
In [147]:
# Single case, multiple LMs
cases = Cases(dft.geos_full) # auto, default p=5
for case in cases: # iter case values()
for LM in case.LMs:
print(LM)
print("\nYou iterated a single case of many LaminateModels.")
From cases, subsets of LaminateModels can be chosen. select is a method that operates on and returns sets of LaminateModels. Plotting functions are not implemented for this method directly; however, the results can be used to make new Cases instances from which .plot() is accessible. Example access techniques using Cases:
cases
cases[0:2]
cases.LMs
cases.LMs[0:2]
cases.select(ps=[3,4])
In [148]:
# Iterating Over Cases
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
In [149]:
#geometries = set(dft.geos_symmetric).union(dft.geos_special + dft.geos_standard + dft.geos_dissimilar)
#cases = Cases(geometries, ps=[2,3,4])
cases = Cases(dft.geos_special, ps=[2,3,4])
# Reveal the full list: dft.geos_special
# for case in cases: # iter case values()
# for LM in case.LMs:
# print(LM)
In [150]:
# Test union of lists
#geometries
In [151]:
cases
Out[151]:
In [152]:
'''Right now a case shares p, size. cases share geometries and size.'''
Out[152]:
In [153]:
cases[0:2]
Out[153]:
In [154]:
'''Hard to see where these come from. Use dict?'''
Out[154]:
In [155]:
cases.LMs
Out[155]:
In [156]:
cases.LMs[0:6:2]
cases.LMs[0:4]
Out[156]:
Selections from the latter cases.
In [157]:
cases.select(nplies=[2,4])
Out[157]:
In [158]:
cases.select(ps=[2,4])
Out[158]:
In [159]:
cases.select(nplies=4)
Out[159]:
In [160]:
cases.select(ps=3)
Out[160]:
Set operations have been implemented in the selection method of Cases, which enables filtering of unique LaminateModels that meet given conditions for nplies and ps.
In [161]:
cases.select(nplies=4, ps=3) # union; default
Out[161]:
In [162]:
cases.select(nplies=4, ps=3, how='intersection') # intersection
Out[162]:
By default, the difference is computed as set(ps) - set(nplies). Currently there is no implementation for the converse difference, but set operations still work.
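As a plain-Python analogy of the subtraction direction (standard sets with placeholder labels, not the LamAna API):
ps_matches = {'LM_a', 'LM_b', 'LM_c'}      # LaminateModels matching the ps criteria
nplies_matches = {'LM_b', 'LM_c', 'LM_d'}  # LaminateModels matching the nplies criteria

ps_matches - nplies_matches                # {'LM_a'}: the default 'difference' direction
nplies_matches - ps_matches                # {'LM_d'}: the converse, via plain set operations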
In [163]:
cases.select(nplies=4, ps=3, how='difference') # difference
Out[163]:
In [164]:
cases.select(nplies=4) - cases.select(ps=3) # set difference
Out[164]:
In [165]:
'''How does this work?'''
Out[165]:
In [166]:
cases.select(nplies=4, ps=3, how='symm diff') # symm difference
Out[166]:
In [167]:
cases.select(nplies=[2,4], ps=[3,4], how='union')
Out[167]:
In [168]:
cases.select(nplies=[2,4], ps=[3,4], how='intersection')
Out[168]:
In [169]:
cases.select(nplies=[2,4], ps=3, how='difference')
Out[169]:
In [170]:
cases.select(nplies=4, ps=[3,4], how='symmeric difference')
Out[170]:
Current logic seems to return a union.
Need logic to append LM for the following:
In [171]:
import numpy as np
a = []
b = 1
c = np.int64(1)
d = [1,2]
e = [1,2,3]
f = [3,4]
test = 1
test in a
#test in b
#test is a
test is c
# if test is a or test is c:
# True
Out[171]:
In [172]:
from lamana.utils import tools as ut
ut.compare_set(d, e)
ut.compare_set(b, d, how='intersection')
ut.compare_set(d, b, how='difference')
ut.compare_set(e, f, how='symmertric difference')
ut.compare_set(d, e, test='issubset')
ut.compare_set(e, d, test='issuperset')
ut.compare_set(d, f, test='isdisjoint')
Out[172]:
In [173]:
set(d) ^ set(e)
ut.compare_set(d,e, how='symm')
Out[173]:
In [174]:
g1 = dft.Geo_objects['5-ply'][0]
g2 = dft.Geo_objects['5-ply'][1]
In [ ]:
In [175]:
cases = Cases(dft.geos_full, ps=[2,5]) # two cases (p=2,5)
for i, case in enumerate(cases): # iter case values()
for LM in case.LMs:
print(LM)
In order to compare objects in sets, they must be hashable. The simple requirement for equality is to include whatever makes the hash of a equal to the hash of b. Ideally, we would hash the Geometry object, but the inner value is a list, which is unhashable due to its mutability. Conveniently, however, strings are hashable. We can try to hash the geometry input strings, once they have been converted to General Convention, as unique identifiers for the Geometry object. This requires some reorganization in Geometry:
- `_to_gen_convention()`
- `geo_strings`
- `_geo_strings`: this cannot be altered by the user.
Here we see the advantage of using geo_strings as hashables: they are inherently hashable.
UPDATE: decided to make a hashable version of the GeometryTuple.
In [176]:
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash('400-200-800')
Out[176]:
In [177]:
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash('400-[200]-800')
Out[177]:
Need to make the Laminate class hashable. Try to use unique identifiers such as Geometry and p.
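A generic sketch of the idea in plain Python (not LamAna code): define identity by a tuple of unique identifiers, here a stand-in geometry string and p:
class HashableModel(object):
    '''Sketch: identity is the (geometry string, p) pair.'''
    def __init__(self, geo_string, p):
        self.geo_string = geo_string
        self.p = p

    def __eq__(self, other):
        return (self.geo_string, self.p) == (other.geo_string, other.p)

    def __hash__(self):
        return hash((self.geo_string, self.p))

len({HashableModel('400-[200]-800', 5), HashableModel('400-[200]-800', 5)})  # 1; duplicates collapse in a set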
In [178]:
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash((case.LMs[0].Geometry, case.LMs[0].p))
Out[178]:
In [179]:
case.LMs[0]
Out[179]:
In [180]:
L = [LM for case in cases for LM in case.LMs]
In [181]:
L[0]
Out[181]:
In [182]:
L[8]
Out[182]:
In [183]:
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash((L[0].Geometry, L[0].p))
Out[183]:
In [184]:
#PYTEST_VALIDATE_IGNORE_OUTPUT
hash((L[1].Geometry, L[1].p))
Out[184]:
In [185]:
set([L[0]]) != set([L[8]])
Out[185]:
Use sets to filter unique geometry objects from Defaults().
In [186]:
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
mix = dft.Geos_full + dft.Geos_all
In [187]:
mix
Out[187]:
In [188]:
set(mix)
Out[188]:
See above. It looks like comparing these lists in different orders gives different results. This test has been quarantined from the repo until a solution is found.
In [189]:
mix = dft.geos_most + dft.geos_standard # 400-[200]-800 common to both
cases3a = Cases(mix, combine=True, unique=True)
cases3a.LMs
Out[189]:
In [190]:
load_params['p'] = 5
cases3b5 = la.distributions.Case(load_params, dft.mat_props)
cases3b5.apply(mix)
In [191]:
cases3b5.LMs[:-1]
Out[191]:
As we transition to more automated techniques, if parameters are to be reused multiple times, it can be helpful to store them as default values.
In [192]:
'''Add how to build Defaults()'''
Out[192]:
In [193]:
# Case Building from Defaults
import lamana as la
from lamana.utils import tools as ut
from lamana.models import Wilson_LT as wlt
dft = wlt.Defaults()
##dft = ut.Defaults() # user-definable
case2 = la.distributions.Case(dft.load_params, dft.mat_props)
case2.apply(dft.geos_full) # multi plies
#LM = case2.LMs[0]
#LM.LMFrame
print("\nYou have built a case using user-defined defaults to set geometric \
loading and material parameters.")
case2
Out[193]:
Finally, if building several cases is required for the same parameters, we can use higher-level API tools to help automate the process.
Note that for every case that is created, a separate Case() instantiation and Case.apply() call are required. These techniques obviate such redundancies.
In [194]:
# Automatic Case Building
import lamana as la
from lamana.utils import tools as ut
#Single Case
dft = wlt.Defaults()
##dft = ut.Defaults()
case3 = ut.laminator(dft.geos_full) # auto, default p=5
case3 = ut.laminator(dft.geos_full, ps=[5]) # declared
#case3 = ut.laminator(dft.geos_full, ps=[1]) # LFrame rollbacks
print("\nYou have built a case using higher-level API functions.")
case3
Out[194]:
In [195]:
# How to get values from a single case (Python 3 compatible)
list(case3.values())
Out[195]:
Cases are differentiated by different ps.
In [196]:
# Multiple Cases
cases1 = ut.laminator(dft.geos_full, ps=[2,3,4,5]) # multi ply, multi p
print("\nYou have built many cases using higher-level API functions.")
cases1
Out[196]:
In [197]:
# How to get values from multiple cases (Python 3 compatible)
list(cases1.values())
Out[197]:
Python 3 no longer returns a list from the .values() method, so list() is used to evaluate the dictionary view. While consuming a case's dict value view with list() works in Python 2 and 3, iteration with loops and comprehensions is the preferred technique for both single and multiple case processing. Once cases are accessed, iteration can access the contents of all cases. Iteration is the preferred technique for processing cases. It is the most general and cleanest approach, Py2/3 compatible out of the box, and it agrees with The Zen of Python:
There should be one-- and preferably only one --obvious way to do it.
In [198]:
# Iterating Over Cases
# Latest style
case4 = ut.laminator(['400-[200]-800']) # a single case and LM
for i, case_ in case4.items(): # iter p and case
for LM in case_.LMs:
print(LM)
print("\nYou processed a case and LaminateModel w/iteration. (Recommended)\n")
case5 = ut.laminator(dft.geos_full) # auto, default p=5
for i, case in case5.items(): # iter p and case with .items()
for LM in case.LMs:
print(LM)
for case in case5.values(): # iter case only with .values()
for LM in case.LMs:
print(LM)
print("\nYou processed many cases using Case object methods.")
In [199]:
# Convert case dict to generator
case_gen1 = (LM for p, case in case4.items() for LM in case.LMs)
# Generator without keys
case_gen2 = (LM for case in case4.values() for LM in case.LMs)
print("\nYou have captured a case in a generator for later, one-time use.")
We will compare two techniques for generating equivalent cases.
In [200]:
# Style Comparisons
dft = wlt.Defaults()
##dft = ut.Defaults()
case1 = la.distributions.Case(load_params, mat_props)
case1.apply(dft.geos_all)
cases = ut.laminator(geos=dft.geos_all)
case2 = cases
# Equivalent calls
print(case1)
print(case2)
print("\nYou have used classic and modern styles to build equivalent cases.")