First, import the processing tools module, which contains the classes and methods to read, plot and process standard-unit particle distribution files.
In [1]:
import processing_tools as pt
The module provides a class 'ParticleDistribution' which, given a filepath, initializes a dictionary containing the following entries:
key | value |
---|---|
'x' | x position |
'y' | y position |
'z' | z position |
'px' | x momentum |
'py' | y momentum |
'pz' | z momentum |
'NE' | number of electrons per macroparticle |
The units follow the Standard Unit specification, but can be converted to SI by calling the class method su2si.
Values can then be accessed through the 'dict' attribute:
In [2]:
filepath = './example/example.h5'
data = pt.ParticleDistribution(filepath)
data.su2si() # convert from standard units to SI
data.dict['x']
Out[2]:
Alternatively, one can ask for a pandas DataFrame in which each row is a macroparticle and each column is one of the properties above.
In [4]:
panda_data = data.DistFrame()
panda_data[0:5]
Out[4]:
This allows for quick plotting using the built-in pandas methods:
In [5]:
import matplotlib.pyplot as plt
plt.style.use('ggplot') # optional
x_axis = 'py'
y_axis = 'px'
plot = panda_data.plot(kind='scatter', x=x_axis, y=y_axis)
# set the axis limits to the data range
plot.set_xlim([panda_data[x_axis].min(), panda_data[x_axis].max()])
plot.set_ylim([panda_data[y_axis].min(), panda_data[y_axis].max()])
plt.show()
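Beyond scatter plots, the same DataFrame lends itself to other quick looks at the bunch. As a minimal sketch using only the 'z' and 'NE' columns listed above, a charge-weighted longitudinal profile can be drawn directly with matplotlib:
In [ ]:
# weighted histogram of longitudinal positions: each macroparticle is weighted
# by its number of electrons ('NE'), giving a rough longitudinal charge profile
plt.hist(panda_data['z'], bins=100, weights=panda_data['NE'])
plt.xlabel('z')
plt.ylabel('weighted counts')
plt.show()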
If further statistical analysis is required, the class 'Statistics' is provided. It contains methods to compute standard properties of the electron bunch and is constructed by passing a filepath to 'Statistics'. The following operations can be performed:
Function | Effect and dict keys |
---|---|
calc_emittance | Calculates the emittance of all the slices, accessible by 'e_x' and 'e_y' |
calc_CoM | Calculates the weighted averages and standard deviations of every parameter per slice, together with the beta functions; see below for the keys. |
calc_current | Calculates current per slice, accessible in the dict as 'current'. |
slice | Slices the data into a given integer number of equal slices. |
'Statistics' is a subclass of ParticleDistribution, so all of the methods described previously also work.
CoM Keys | Parameter (per slice) |
---|---|
CoM_x, CoM_y, CoM_z | Centre of mass of x, y, z positions |
std_x, std_y, std_z | Standard deviation of x, y, z positions |
CoM_px, CoM_py, CoM_pz | Centre of mass of x, y, z momenta |
std_px, std_py, std_pz | Standard deviation of x, y, z momenta |
beta_x, beta_y | Beta functions (assuming Gaussian distribution) in x and y |
Furthermore, there is 'Step_Z', which returns the size of a slice, as well as 'z_pos', which gives the central position of a given slice.
From this class both DistFrame (containing the same data as above) and StatsFrame can be called:
In [6]:
stats = pt.Statistics(filepath)
#preparing the statistics
stats.slice(100)
stats.calc_emittance()
stats.calc_CoM()
stats.calc_current()
#display pandas example
panda_stats = stats.StatsFrame()
panda_stats[0:5]
Out[6]:
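If plain arrays are preferred over a DataFrame, the per-slice results can also be read straight from the dict using the keys listed above (a minimal sketch reusing the same stats object, assuming each entry holds one value per slice as described):
In [ ]:
# per-slice arrays via the dict (keys from the tables above)
emittance_x = stats.dict['e_x']
beta_x = stats.dict['beta_x']
current = stats.dict['current']
print(emittance_x[:5], beta_x[:5], current[:5])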
In [7]:
ax = panda_stats.plot(x='z_pos',y='CoM_y')
panda_stats.plot(ax=ax, x='z_pos', y='std_y', c='b') # passing ax puts both curves on the same axes
plt.show()
Finally, there is the FEL_Approximations class, which calculates simple FEL properties per slice. It is a subclass of Statistics, so every method described above is callable.
This class contains the 'undulator' function, which calculates planar undulator parameters given a period and either a peak magnetic field or a K value.
The data must be sliced and most of the statistics have to be run before the other calculations can take place. These are 'pierce', which calculates the Pierce parameter and the 1D gain length for a given slice, and 'gain length', which calculates the Ming Xie gain and fills three dict entries, 'MX_gain', '1D_gain' and 'pierce', each holding an array with one value per slice. 'FELFrame' returns a pandas dataframe with these and 'z_pos' for reference.
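A possible manual workflow is sketched below; the method names follow the description above, but the exact call signatures (in particular the arguments of 'undulator' and the name of the gain-length method) are assumptions and may differ:
In [ ]:
# manual FEL_Approximations workflow (a hedged sketch, not verbatim API)
fel = pt.FEL_Approximations(filepath)
fel.slice(100)         # the data must be sliced first
fel.calc_emittance()   # most statistics are needed before the FEL calculations
fel.calc_CoM()
fel.calc_current()
fel.undulator(0.00275, k_fact=2.7) # period plus either a peak field or K value; k_fact name assumed
fel.pierce()                       # Pierce parameter and 1D gain length per slice
fel.gain_length()                  # Ming Xie gain; method name assumed from 'gain length' above
panda_fel = fel.FELFrame()         # 'MX_gain', '1D_gain', 'pierce' and 'z_pos' per slice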
To make this easier, the class ProcessedData takes a filepath, a number of slices, an undulator period, and a magnetic field or K value, and performs all the necessary steps automatically. As this is a subclass of FEL_Approximations, all the values described above are accessible from here.
In [8]:
FEL = pt.ProcessedData(filepath,num_slices=100,undulator_period=0.00275,k_fact=2.7)
panda_FEL = FEL.FELFrame()
panda_stats = FEL.StatsFrame()
panda_FEL[0:5]
Out[8]:
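The Pierce parameter can be inspected along the bunch in the same way, since the FELFrame carries 'z_pos' for reference:
In [ ]:
# per-slice Pierce parameter along the bunch
ax = panda_FEL.plot(x='z_pos', y='pierce')
plt.show()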
If it is important to plot the statistical data alongside the FEL data, that can easily be achieved by concatenating the two sets, as shown below.
In [9]:
import pandas as pd
# join the two frames so FEL parameters and slice statistics can be plotted together
# (join_axes has been removed from pandas; reindexing on panda_FEL.index gives the same result)
cat = pd.concat([panda_FEL, panda_stats], axis=1).reindex(panda_FEL.index)
cat['1D_gain'] = cat['1D_gain'] * 4e10 # scale to allow a visual comparison if needed
az = cat.plot(x='z_pos',y='1D_gain')
cat.plot(ax=az, x='z_pos',y='MX_gain',c='b')
plt.show()