Example Usage of HDFWriter

If the properties of a class need to be saved in an HDF file, the class should inherit from HDFWriterMixin, as demonstrated below.

hdf_properties (list) : Contains the names of all the properties that need to be saved.
hdf_name (str) : Specifies the default name of the group under which the properties will be saved.


In [1]:
from tardis.io.util import HDFWriterMixin

class ExampleClass(HDFWriterMixin):
    hdf_properties = ['property1', 'property2']
    hdf_name = 'mock_setup'
    def __init__(self, property1, property2):
        self.property1 = property1
        self.property2 = property2

In [2]:
import numpy as np
import pandas as pd

# Create example properties and instantiate the object
property1 = np.array([4.0e14, 2, 2e14, 27.5])
property2 = pd.DataFrame({'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
                          'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])})
obj = ExampleClass(property1, property2)

You can now save the properties using the to_hdf method.

Parameters

file_path : Path where the HDF file will be saved
path : Path inside the HDF store to store the elements
name : Name of the group inside the HDF store under which the properties will be saved.
If not specified, the value of the hdf_name attribute is used.
If hdf_name is also not defined, the class name is converted to snake case and that value is used.
For example, if name is not passed as an argument and hdf_name were not defined for ExampleClass above, the properties would be saved under the example_class group.


In [3]:
obj.to_hdf(file_path='test.hdf', path='test')
# obj.to_hdf(file_path='test.hdf', path='test', name='hdf')

You can now read the HDF file using pd.HDFStore or pd.read_hdf.


In [4]:
# Read the HDF file
with pd.HDFStore('test.hdf', 'r') as data:
    print(data)
    # print(data['/test/mock_setup/property1'])


<class 'pandas.io.pytables.HDFStore'>
File path: test.hdf
/test/mock_setup/property1            series       (shape->[4])  
/test/mock_setup/property2            frame        (shape->[4,2])
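
The same data can also be loaded with pd.read_hdf by passing the key of a stored property, for example:

# Load a single stored property by its key
property2_df = pd.read_hdf('test.hdf', '/test/mock_setup/property2')
print(property2_df)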

Saving nested class objects

Just extend the hdf_properties list to include the nested object.


In [5]:
class NestedExampleClass(HDFWriterMixin):
    hdf_properties = ['property1', 'nested_object']
    def __init__(self, property1, nested_obj):
        self.property1 = property1
        self.nested_object = nested_obj

In [6]:
obj2 = NestedExampleClass(property1, obj)

In [7]:
obj2.to_hdf(file_path='nested_test.hdf')

In [8]:
# Read the HDF file
with pd.HDFStore('nested_test.hdf', 'r') as data:
    print(data)


<class 'pandas.io.pytables.HDFStore'>
File path: nested_test.hdf
/nested_example_class/nested_object/property1            series       (shape->[4])  
/nested_example_class/nested_object/property2            frame        (shape->[4,2])
/nested_example_class/property1                          series       (shape->[4])  
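
Since NestedExampleClass does not define hdf_name, its properties are stored under the snake-case group nested_example_class, following the naming rules described earlier. A property of the nested object can be read back by its full path, for example:

# Read a property of the nested object by its full path
nested_property2 = pd.read_hdf('nested_test.hdf',
                               '/nested_example_class/nested_object/property2')
print(nested_property2)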

Modified Usage

In the BasePlasma class, the properties of an object are collected differently; it does not use the hdf_properties attribute. That is why PlasmaWriterMixin (which extends HDFWriterMixin) changes how the properties of the BasePlasma class are collected, by overriding the get_properties function.

Here is a quick demonstration of how the behaviour of the default get_properties function inside HDFWriterMixin can be changed by subclassing it to create a new mixin.


In [9]:
class ModifiedWriterMixin(HDFWriterMixin):
    def get_properties(self):
        # Change how properties are collected from the class here
        data = {name: getattr(self, name) for name in self.outputs}
        return data

A demo class using this modified mixin.


In [10]:
class DemoClass(ModifiedWriterMixin):
    outputs = ['property1']
    hdf_name = 'demo'
    def __init__(self, property1):
        self.property1 = property1

In [11]:
obj3 = DemoClass('random_string')
obj3.to_hdf('demo_class.hdf')
with pd.HDFStore('demo_class.hdf', 'r') as data:
    print(data)


<class 'pandas.io.pytables.HDFStore'>
File path: demo_class.hdf
/demo/scalars            series       (shape->[1])
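
Scalar properties (such as the string above) are collected into a single scalars series under the group. A minimal sketch of reading such a value back, assuming the layout shown above:

# The 'scalars' series maps property names to their scalar values
scalars = pd.read_hdf('demo_class.hdf', '/demo/scalars')
print(scalars['property1'])  # expected to print 'random_string'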