0.1 Create or load a 'batch' file

This is what I planned to call a journal. An existing example is the infotable.

  • from_db: create from database
  • from_file: load from file
  • create: make manually
  • to_file: save
  • paginate: create a suitable folder structure
class BaseJournal:
    """A journal keeps track of the details of the experiment.

    The journal should at a minimum contain information about the name and
    project of the experiment."""

    def __init__(self):
        self.pages = None  # pandas.DataFrame
        self.name = None  # name of the experiment
        self.id = None  # identifier for this journal
        self.project = None
        self.parameters = None # could be its own class or a dict
        self._path_name = None

    def from_db(self):
        pass

    def from_file(self, file_name):
        pass

    def to_file(self, file_name=None):
        pass

    @property
    def path_name(self):
        pass

    def look_for_file(self):
        pass
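
The to_file / from_file pair could round-trip the journal through a simple on-disk format. A minimal sketch, assuming a JSON layout and storing the pages as a list of records (class and field names here are illustrations, not the final cellpy API):

```python
import json


class SimpleJournal:
    """Minimal stand-in for BaseJournal illustrating file round-tripping.

    Names and layout are assumptions for this sketch, not the cellpy API.
    """

    def __init__(self, name=None, project=None):
        self.name = name
        self.project = project
        self.pages = []  # list of dicts; a pandas.DataFrame in the real class

    def to_file(self, file_name):
        # dump metadata and pages together so the file is self-describing
        payload = {"name": self.name, "project": self.project, "pages": self.pages}
        with open(file_name, "w") as f:
            json.dump(payload, f, indent=2)

    def from_file(self, file_name):
        with open(file_name) as f:
            payload = json.load(f)
        self.name = payload["name"]
        self.project = payload["project"]
        self.pages = payload["pages"]
        return self
```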

-example session

experiment = CyclingExperiment()
#----setting some prms------
experiment.prms.Reader.some_parameter = True  # This will be valid globally
# or
cellpy.prms.Reader.some_parameter = True  # This will be valid globally
# or
experiment.settings.dynamic_linking = True  # This is only valid for this instance

#----generating the journal--
experiment.journal.from_db(project="Nano", name="exp001", batch_col=5)

...

print(experiment.journal)
experiment.journal.pages.head(10)  # show first 10 rows of the pages DataFrame
experiment.journal.to_file()


0.2 Create appropriate cellpy-files

This applies if they have not been made already, or if they need updating (i.e. there could be a method / function for this, for example .update(options) or cellpy.cellreader.update(journal)).

-example session continued

# checking if the cellpy-files exist and loading step tables
experiment.link()
# or just run an update anyway
experiment.update()
experiment.status()
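
The choice between linking and updating could boil down to a freshness check: only re-create a cellpy-file when it is missing or older than its raw file. A sketch under that assumption (the helper name is made up; the real logic might also compare sizes or checksums):

```python
import os


def needs_update(raw_file, cellpy_file):
    """Return True if the cellpy-file is missing or older than the raw file.

    Hypothetical helper; not part of the cellpy API.
    """
    if not os.path.exists(cellpy_file):
        return True
    return os.path.getmtime(raw_file) > os.path.getmtime(cellpy_file)
```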


1. Examining the experiment

1.1 Looking at the summaries

-example session continued

(batch analysing)

# in case you have forgotten what kind of analysers and exporters this contains
experiment.info()
# journal:
#   standard
# exporters:
#   csv_exporter
#     - dialect: standard
#     - path: /processing/nano/exp001/*
#   origin_exporter
#     - dialect: sub_header
#     - path: /processing/nano/exp001/*
# plotters:
#   interactive
#     - backend: bokeh
# analysers:
#   ica
#   cycles
# reporters:
#   simple_html

# think I would like to add another analyser
from cellpy.batch.analysers import OCVAnalyser
my_ocv_analyser = OCVAnalyser(randles=2, splitting=True)
experiment.analysers.register("ocv", my_ocv_analyser)
experiment.analysers.run_all()
experiment.analysers.run(name="ocv")
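
The register / run / run_all pattern used above suggests a small registry object shared by analysers, exporters, and reporters. A minimal sketch of how such a registry could look (class and method names are assumptions):

```python
class Registry:
    """Minimal plugin registry for analysers, exporters, or reporters.

    Sketch only; the real cellpy batch objects may differ.
    """

    def __init__(self):
        self._items = {}

    def register(self, name, item):
        self._items[name] = item

    def run(self, name, **opts):
        # each registered item is expected to expose a run(**opts) method
        return self._items[name].run(**opts)

    def run_all(self, **opts):
        # run every registered item, keyed by its registered name
        return {name: item.run(**opts) for name, item in self._items.items()}
```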

(exploring)

from cellpy.batch.tools import get_cycles, get_ica, get_ocv_rlx, combine
voltage_cycles = get_cycles(experiment.data, "2018_very_good_cell_01", (1, 2, 3), tidy=True)
voltage_cycles.head()

# or work directly on the cellpydata
voltage_cycles = experiment.data["2018_very_good_cell_01"].get_cap(cycles=(1, 2, 3))
# etc.

charge_cap_frame = combine(experiment, "charge_capacity")
discharge_cap_frame = combine(experiment, "discharge_capacity")
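
combine presumably pulls one summary column out of every cell in the experiment and lines the results up side by side. A plain-Python sketch of that idea (the real cellpy version works on pandas objects and likely returns a DataFrame):

```python
def combine(summaries, column):
    """Collect one column from each cell's summary into a single table.

    `summaries` is assumed to be a dict mapping cell names to dicts of
    column-name -> list-of-values; this signature is an illustration only.
    """
    return {cell: summary[column] for cell, summary in summaries.items()}
```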


2.0 Exporting

opts = dict()
opts["blue"] = True
experiment.exporters.run(name="csv_exporter", data=charge_cap_frame, **opts)
experiment.exporters.run(name="csv_exporter", what="cycles")

experiment.exporters.run_all() # exports everything
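
Internally, a csv_exporter along these lines could be a thin wrapper around the standard csv module, writing one column per cell. A hedged sketch (the helper name and the dict-of-columns input are assumptions):

```python
import csv


def export_csv(frame, file_name):
    """Write a dict of equal-length columns (cell name -> values) as CSV.

    Hypothetical helper illustrating what csv_exporter might do internally.
    """
    columns = list(frame)
    rows = zip(*(frame[c] for c in columns))
    with open(file_name, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)  # header: one column per cell
        writer.writerows(rows)
```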

3.0 Reporting

opts = dict()
opts["html5"] = True
experiment.reporters.run(name="simple_html", **opts)
experiment.reporters.run(name="notebook")
experiment.reporters.run_all() # runs all reporters