In [2]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
Goal: Write the script to produce the bicorrelation plot. Here are the major steps in this work: parse the cced files from subfolders (each a different measurement from the same measurement set). Note: I found a better way to do the data read-in, which I explore in bicorr_readin.ipynb.
I need to parse the cced file for each measurement set to produce a distribution of counts vs. $\Delta t_A$ vs. $\Delta t_B$ for two pairs of detectors.
Select two detectors in the array: A and B. Find events where there are triggers in the fission chamber, A, and B. Calculate the time of the interaction in A and B relative to the fission chamber as:
$$\Delta t_A = t_A - t_{FC}$$
$$\Delta t_B = t_B - t_{FC}$$
The money plot looks like this:
In [1]:
%%html
<img src="fig/plot_sketch.png" width="80%" height="80%">
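As a quick numerical sketch of the quantities defined above (all times here are made up; a real analysis accumulates many detector pairs into the 2-D distribution):

```python
import numpy as np

# Hypothetical times (ns) for one event: fission chamber trigger plus detectors A and B
t_fc, t_a, t_b = 56.8, 86.8, 96.8
dt_a = t_a - t_fc  # Delta t_A = t_A - t_FC
dt_b = t_b - t_fc  # Delta t_B = t_B - t_FC

# Accumulating many such (dt_a, dt_b) pairs builds the counts vs. dt_A vs. dt_B plot
counts, xedges, yedges = np.histogram2d([dt_a], [dt_b], bins=10, range=[[0, 100], [0, 100]])
print(dt_a, dt_b, counts.sum())
```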
In [1]:
# Import packages
import os.path
import time
import sys
import inspect
import numpy as np
np.set_printoptions(threshold=sys.maxsize) # print entire matrices
import matplotlib.pyplot as plt
I am going to construct a new script called bicorr.py, so set up some code to import and refresh bicorr.py for when the time comes. I keep bicorr.py in a separate folder called scripts where I version control everything.
First I need to add the folder that contains bicorr.py to my system path for this current python session. I wrote a tutorial with more detailed instructions on how to do this here: https://github.com/pfschus/python-tutorial-file-manipulation/blob/master/python_access_other_folders.ipynb
In [2]:
sys.path.append('../scripts/')
import bicorr as bicorr
If I make modifications to my bicorr.py scripts while running this Jupyter notebook and I want to refresh all the functions, run the following:
In [3]:
%load_ext autoreload
%autoreload 2
Copy part of the cced1 file to my local machine: I need to download part of the cced1 file.
To copy part of the cced1 file into another file, run the following command:
head -n 1000 cced1 > cced1_part
This copies the first 1000 lines to a new file called cced1_part. I'm going to do that, but for 10,000 lines so I have a smaller file to work with while I write the bicorr script.
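The 10,000-line version of that command looks like this (the `seq` line just builds a dummy stand-in for cced1 so the sketch is self-contained):

```shell
# Stand-in for the real cced1 file (dummy data, for illustration only)
seq 1 20000 > cced1
# Copy the first 10,000 lines into a smaller working file
head -n 10000 cced1 > cced1_part
wc -l < cced1_part
```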
The contents of the cced file look like:
1 7 2 60.43050003051757812500 1.65172 0.20165
1 40 1 -237.56460189819335937500 0.36266 0.03698
2 0 2 56.02870178222656250000 1.00000 0.37657
2 32 2 55.86729812622070312500 1.00000 0.38003
2 33 2 76.49300003051757812500 0.17479 0.03698
3 24 2 58.41870117187500000000 0.90767 0.12300
3 31 2 68.34080123901367187500 0.25033 0.04415
4 10 1 60.55670166015625000000 6.73639 0.82892
4 15 2 69.21060180664062500000 3.84989 0.48209
5 1 1 58.71749877929687500000 1.83345 0.16218
5 12 2 -7.76129913330078125000 0.25288 0.03376
6 4 2 57.22999954223632812500 0.44034 0.05057
6 6 2 10.50279998779296875000 0.27631 0.03217
7 32 2 55.97380065917968750000 1.00000 0.47541
7 44 2 67.53409957885742187500 0.42906 0.06017
8 17 2 56.86849975585937500000 0.08842 0.03812
8 27 2 55.96139907836914062500 1.63040 0.19290
8 30 2 106.28549957275390625000 3.32284 0.41720
9 41 2 56.37910079956054687500 0.28385 0.04117
9 42 1 57.03319931030273437500 1.57115 0.22264
10 22 2 58.14680099487304687500 0.84705 0.13257
10 28 2 325.81359863281250000000 0.42242 0.06293
11 0 2 52.17919921875000000000 1.00000 0.34562
11 14 1 64.22990036010742187500 0.25867 0.04041
12 16 2 56.74929809570312500000 1.00000 0.44734
12 24 1 128.21559906005859375000 1.11157 0.13403
13 37 2 59.00970077514648437500 1.79579 0.24159
13 47 2 58.27090072631835937500 0.86533 0.09690
14 32 2 55.12779998779296875000 1.00000 0.38950
14 46 2 64.88420104980468750000 1.73512 0.20411
Where the columns are:
evnum
chnum
part
time
totint
height
In [7]:
cleanFile = open('../datar/cced1_part','r')
In [8]:
print(cleanFile)
Access cced files in subfolders: When I do the analysis on the cluster, I will have to get data out of subfolders separated by measurement. How do I do that? I created a new folder 1 on my local machine with cced1_part in it, but then I renamed it to cced1 to match the filenames that I will encounter on the cluster. Try accessing it.
In [14]:
cleanFile = open("../datar/1/cced1",'r')
print(cleanFile)
cleanFile.close()
In [48]:
root_path = '../datar'
folder = 1
In [49]:
os.path.join(root_path, str(folder))
Out[49]:
In [44]:
os.listdir(os.path.join(root_path, str(folder)))
Out[44]:
Set a default root_path = None
In [51]:
root_path = None
if root_path is None: root_path = os.getcwd()
folder = 1
In [52]:
os.path.join(root_path, str(folder))
Out[52]:
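For what it's worth, os.path.join can take the path components as separate arguments and insert the separator itself, which avoids the manual '/' concatenation (a small sketch):

```python
import os

root_path = '../datar'
folder = 1

# join inserts the OS path separator between components
path = os.path.join(root_path, str(folder))
print(path)
```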
Parse the cced file: Attempt 1, for line in cleanFile
Use the method that Matthew used in his script crossCorr_v4.py, which also parses the cced file. Read in the cced file and look at its contents.
This strategy uses an iterator to go line-by-line. The trouble here is that you can't go back to previous lines, so I will have to store the event information from every line even if I don't know whether that event is a bicorrelation event. Also, there is no way to tell the basic iterator to only parse a few lines, or a set range of lines; it will always parse the entire file.
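One partial workaround, for what it's worth: itertools.islice can restrict a file iterator to a range of lines, though it still cannot rewind to earlier lines. A small sketch using a hypothetical demo.txt:

```python
from itertools import islice

# Build a hypothetical 100-line file to iterate over
with open('demo.txt', 'w') as f:
    f.write('\n'.join(str(n) for n in range(100)))

# Parse only lines 5 through 7 (0-indexed) instead of the whole file
with open('demo.txt') as f:
    lines = [line.strip() for line in islice(f, 5, 8)]
print(lines)  # ['5', '6', '7']
```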
In [13]:
os.listdir('../datar/1')
Out[13]:
In [15]:
cleanFile = open("../datar/1/cced1",'r')
cleanFile.seek(0,0) # Go back to the beginning of the file (if you have been trying other things the current position may be later)
# What does the file look like?
for line in cleanFile:
    holder = line.split()
    if int(holder[0]) < 20: # Only look at first 20 events
        print(holder)
cleanFile.close()
Parse the cced file: Attempt 2, np.genfromtxt
Instead of reading the cced file line by line using an iterator, pull in the contents of the cced file using the numpy function np.genfromtxt. Documentation is here: https://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html
This function offers the capability of specifying the dtype, which is the data type for each value along a row. Follow Matthew's method with ccedType in his script.
In [16]:
# Set up formatting for pulling out data (copy from Matthew)
ccedType = np.dtype([('event', np.int32), ('detector', np.int8), ('particle_type', np.int8), ('time', np.float64), ('integral', np.float32), ('height', np.float32)])
# Try reading in the entire file at once
# In this case, I don't need to open the file using open()
data = np.genfromtxt("../datar/1/cced1",dtype=ccedType)
print(data[0:40])
Actually, in this case, I don't even have to open the cced file before reading it. I can just specify the filename here. So that simplifies things.
I don't need this now, but I'm wondering whether it is possible to read in certain parts of the data, or can I only ever read the entire thing from start to finish? I should be able to use the options skip_header and skip_footer, the number of lines to skip at the beginning and end of the file. The command for this is:
data = np.genfromtxt("cced1_part", dtype=ccedType, skip_header=3, skip_footer=3)
I won't run it here because I don't want to overwrite the data matrix I already read in.
This looks like a promising route, and better than the for line in ccedFile iterator technique. Go with this.
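To avoid touching the data matrix, skip_header and skip_footer can be demonstrated on an in-memory snippet instead (a sketch; the toy rows mimic the first lines of the cced listing above):

```python
import io
import numpy as np

ccedType = np.dtype([('event', np.int32), ('detector', np.int8), ('particle_type', np.int8),
                     ('time', np.float64), ('integral', np.float32), ('height', np.float32)])

# A few rows in the cced format (values abbreviated from the sample listing)
text = """1 7 2 60.43 1.65172 0.20165
1 40 1 -237.56 0.36266 0.03698
2 0 2 56.03 1.00000 0.37657
2 32 2 55.87 1.00000 0.38003
2 33 2 76.49 0.17479 0.03698"""

# Drop the first and last rows on read-in
data_part = np.genfromtxt(io.StringIO(text), dtype=ccedType, skip_header=1, skip_footer=1)
print(data_part['event'])  # [1 2 2]
```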
Now that I have the data matrix, pull out event information. The ccedType specifies what all of the columns look like. As a reminder, the column names are:
event
detector
particle_type
time
integral
height
Play around with the indexing for a while to understand how to access certain information.
In [17]:
# Print all values for the first interaction
print('event', data[0]['event'])
print('detector', data[0]['detector'])
print('particle_type',data[0]['particle_type'])
print('time', data[0]['time'])
print('integral', data[0]['integral'])
print('height', data[0]['height'])
print( data[0])
In [18]:
# Print all values for the first five interactions
print(data[0:5])
In [19]:
# Print event numbers for event numbers less than 3
indices = np.argwhere(data[:]['event']<3)
print(indices)
print(data[indices]['event'])
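An equivalent boolean-mask form of that lookup, sketched on a toy structured array (the two-field dtype here is just for illustration):

```python
import numpy as np

toyType = np.dtype([('event', np.int32), ('time', np.float64)])
toy = np.array([(1, 60.4), (1, -237.6), (2, 56.0), (3, 58.4)], dtype=toyType)

# Boolean indexing selects the matching rows directly, without argwhere
mask = toy['event'] < 3
print(toy[mask]['event'])  # [1 1 2]
```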
Iterate through data and identify when the event number advances
I need to go through the events and find bicorrelation events. In that case there would be at least two detector interactions, and their corresponding fission chamber channels, in a single event. First set up a way to look at all interactions within a history.
Use an iterator to go through one by one. According to stackoverflow, it is the most efficient way to go line by line through a giant matrix. Unlike with the read-in method for line in cleanFile, all of the data is still stored in data, so the iterator just goes through the events and prompts when to look at data.
First set up the framework for iterating through the data matrix. Use the python function enumerate to loop through the event numbers. If I set it up using the following syntax, python will keep track of the line number l and event number e together at once:
for l, e in enumerate(data[:]['event']):
For now just run through the first 20 lines.
In [20]:
# l is the line number of the current line, starting at 0.
# e is the event number of the current line, starting at 1
# eventNum is the current event number, extending from lines i to j.
eventNum = data[0]['event']; # Start with the first event in the data chunk.
# If reading entire file, this is 1.
# If reading a chunk, this may be higher.
i = 0; # First line number of first event is always 0
# This is a clever way of keeping track what line you're on. Enumerate through the event numbers, `e`, and python also keeps track of the line number `l`.
for l, e in enumerate(data[:20]['event']):
    print("Reading line: ",l,"; event: ",e)
    if e == eventNum: # Still on the same event
        pass # Don't do anything.... but hold this here in case I want to add something
    if e != eventNum: # Store info from currentEvent, move onto next event.
        j = l # The last index of eventNum is the previous line
        print("The line after event ", eventNum, "is ", j) # Event number advances
        print("New event number is e=", e)
        # This is where all of the event information will be pulled out.
        eventNum = e # Move on to next event
        i = l # Current line is the first line for next event
The rest of this analysis is not going to look as beautiful because I am essentially going to copy that last chunk of code and build in new capabilities over and over again. So be ready for a lot of repetition and print statements.
In [24]:
help(bicorr.build_ch_lists)
In [53]:
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag = True)
Each of the following target tasks will be indicated in the code below by including (TARGET #) in the comments: find events with more than two interactions between line indices i and j (TARGET 1), check that at least two of those interactions are in detectors (TARGET 2), and check that the corresponding fission chambers also triggered (TARGET 3).
In [54]:
# l is the line number of the current line, starting at 0.
# e is the event number of the current line, starting at 1
# eventNum is the current event number, extending from lines i to j.
eventNum = data[0]['event']; # Start with the first event in the data chunk.
# If reading entire file, this is 1.
# If reading a chunk, this may be higher.
i = 0; # First line number of first event is always 0
I am going to make use of some boolean logic here. If chs_bool is a boolean array, then taking the sum of chs_bool will count the number of True instances.
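A small sketch of that counting trick. The channel list below assumes, as in the fc_corr arithmetic later, that fission chamber channels are multiples of 16; the real lists come from bicorr.build_ch_lists:

```python
import numpy as np

chs_present = np.array([16, 22, 30, 32, 43])                  # channels in a toy event
detList = np.array([ch for ch in range(48) if ch % 16 != 0])  # assumed: fc chs are multiples of 16
chs_bool = np.isin(chs_present, detList)                      # modern spelling of np.in1d
print(chs_bool)        # [False  True  True False  True]
print(chs_bool.sum())  # summing booleans counts the Trues: 3
```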
In [55]:
# This is a clever way of keeping track of what line you're on. Enumerate through the event numbers, `e`, and python also keeps track of the line number `l`.
for l, e in enumerate(data[:40]['event']):
    if e == eventNum: # Still on the same event
        pass # Hold for later
    if e != eventNum: # Store info from current event, move onto next event.
        j = l # The last index of eventNum is the previous line
        n_ints = j-i # Number interactions in current event
        if n_ints > 2: # If there are more than 2 interactions in the event (TARGET 1)
            ccedEvent = data[i:j][:] # Pull out the data from this event
            print("ccedEvent:")
            print(ccedEvent)
            chs_present = ccedEvent[:]['detector'] # What channels triggered?
            print("chs_present",chs_present)
            chs_bool = np.in1d(chs_present,detList) # True = detector, False = fission chamber
            print("chs_bool",chs_bool)
            if sum(chs_bool)>1: # If at least two interactions were detectors / two or more Trues (TARGET 2)
                # Did the corresponding fc's trigger?
                dets_present = chs_present[chs_bool] # Which det channels triggered?
                print(dets_present)
                fc_corr = (16*np.floor(dets_present/16)).astype(int) # Corresponding fission chambers for each det channel
                print("fc_corr",fc_corr)
                fc_bool = np.in1d(fc_corr,chs_present) # Did the corresponding fission chamber trigger? (TARGET 3)
                print("fc_bool",fc_bool)
                if sum(fc_bool)>1: # If at least two detectors had corresponding fc triggers
                    # Only keep events where det and corresponding fc triggered
                    print("bicorrelation event!")
                    print(dets_present[fc_bool])
                    print(fc_corr[fc_bool])
        eventNum = e # Move on to next event
        i = l # Current line is the first line for next event
Event 16 is identified as a bicorrelation event. Now that it has been identified, extract relevant information.
So far, I have identified events that qualify as a bicorr event. Now I need to extract information from that event. I will start by extracting the channel numbers for the detectors that triggered.
Test out how to identify indices of elements in one array within another array. This could eventually be a subroutine.
In [56]:
# Generate a simple 'dummy' list to practice with
dets_present = np.array([9,46])
chs_present = np.array([0,9,32,46])
chs_bool = np.array([False,True,False,True])
fc_corr = np.array([0,32])
fc_bool = np.array([True,True])
det_indices = np.zeros(len(dets_present),dtype=np.int8) # Indices in chs_present of these det chs
fc_indices = np.zeros(len(fc_corr),dtype=np.int8) # Indices in chs_present of the corresponding fc chs
# Loop through all detectors present; determine the corresponding index in chs_present
# d is a counter 0, 1, 2...
for d in range(0,len(dets_present),1):
    print('d = ', d)
    print('det_ch = ', dets_present[d])
    print('indices in chs_present = ', np.where(chs_present == dets_present[d])[0])
    det_indices[d] = np.where(chs_present == dets_present[d])[0]
    print('det_indices = ', det_indices)
    fc_indices[d] = np.where(chs_present == fc_corr[d])[0]
    print('fc_indices = ', fc_indices)
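As a hedged aside, this lookup could also be vectorized: dets_present is chs_present[chs_bool], so the detector indices are just the True positions of the mask, and, assuming chs_present is sorted within an event (as it appears to be in the cced data), searchsorted finds the fission chamber rows:

```python
import numpy as np

chs_present = np.array([0, 9, 32, 46])
chs_bool = np.array([False, True, False, True])
fc_corr = np.array([0, 32])

det_indices = np.flatnonzero(chs_bool)              # positions of the detector channels
fc_indices = np.searchsorted(chs_present, fc_corr)  # positions of the corresponding fc channels
print(det_indices, fc_indices)  # [1 3] [0 2]
```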
Incorporate this into the previous chunk of code.
In [57]:
for l, e in enumerate(data[:40]['event']):
    # print("Reading line: ",l,"; event: ",e)
    if e == eventNum: # Still on the same event
        pass # Don't do anything.... but hold this here in case I want to add something
    if e != eventNum: # Store info from current event, move onto next event.
        j = l # The last index of eventNum is the previous line
        n_ints = j-i # Number interactions in current event
        if n_ints > 2: # If there are more than 2 interactions in the event
            ccedEvent = data[i:j][:] # Pull out the data from this event
            print("ccedEvent:")
            print(ccedEvent)
            chs_present = ccedEvent[:]['detector'] # What channels triggered?
            print("chs_present",chs_present)
            chs_bool = np.in1d(chs_present,detList) # True = detector, False = fission chamber
            print("chs_bool",chs_bool)
            if sum(chs_bool)>1: # If at least two interactions were detectors / two or more Trues
                # Did the corresponding fc's trigger?
                dets_present = chs_present[chs_bool] # Which det channels triggered?
                print(dets_present)
                fc_corr = 16*np.floor(dets_present/16) # Corresponding fission chambers for each det channel
                fc_corr = fc_corr.astype(int) # Convert to int
                print("fc_corr",fc_corr)
                fc_bool = np.in1d(fc_corr,chs_present) # Did the corresponding fission chamber trigger?
                print("fc_bool",fc_bool)
                if sum(fc_bool)>1: # If at least two detectors had corresponding fc triggers
                    # Only keep events where det and corresponding fc triggered (fc_bool = True)
                    print("bicorrelation event!")
                    dets_present = dets_present[fc_bool]
                    print('dets_present:',dets_present)
                    fc_corr = fc_corr[fc_bool]
                    print('fc_corr:',fc_corr)
                    # Log dead events? Look for interactions where this happens
                    det_indices = np.zeros(len(dets_present),dtype=np.int8) # Indices in chs_present of these det chs
                    fc_indices = np.zeros(len(fc_corr),dtype=np.int8) # Indices in chs_present of the corresponding fc chs
                    print('det_indices = ',det_indices)
                    for d in range(0,len(dets_present),1): #(TARGET)
                        print('d = ', d)
                        print('det_ch = ', dets_present[d])
                        print('index in chs_present = ', np.where(chs_present == dets_present[d])[0])
                        det_indices[d] = np.where(chs_present == dets_present[d])[0]
                        print('det_indices = ', det_indices)
                        fc_indices[d] = np.where(chs_present == fc_corr[d])[0]
                        print('fc_indices = ', fc_indices)
        eventNum = e # Move on to next event
        i = l # Current line is the first line for next event
There are a few detector effects that make it so that there is a constant time offset between each detector pair. Matthew has calculated these time offsets (although they are not all accurate), so I need to implement a correction.
Follow the technique he uses in his crossCorr_v4.py script. First explore what the timeOffsetData.txt file looks like.
In [65]:
root_path = '../datar'
In [66]:
os.listdir(root_path)
Out[66]:
In [69]:
os.listdir(os.path.join(root_path, str(folder)))
Out[69]:
In [70]:
folder = 1
timeOffsetData = np.genfromtxt(os.path.join(root_path, str(folder), 'timeOffset.txt'))
In [75]:
plt.pcolormesh(chList,chList[:-1],timeOffsetData)
plt.xlabel('Detector 1')
plt.ylabel('Detector 2')
plt.title('Time offset between detector pairs (ns)')
plt.xlim([0,47])
plt.ylim([0,46])
plt.colorbar()
plt.gca().set_aspect('equal')
plt.show()
The syntax for calling a value from timeOffsetData is:
timeOffsetData[d1][d2]
where d1 is the first detector channel number and d2 is the second detector channel number. In all cases, d1 must be less than d2. The indices where d1 is greater than d2 are empty in timeOffsetData.
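A small helper could enforce the d1 < d2 ordering automatically (get_offset is a hypothetical name, and the toy table below only stands in for timeOffsetData):

```python
import numpy as np

# Toy 4x4 offset table with only the upper triangle (d1 < d2) filled
table = np.zeros((4, 4))
table[0][2] = 1.5
table[1][3] = -0.7

def get_offset(table, a, b):
    d1, d2 = (a, b) if a < b else (b, a)  # reorder so the smaller channel indexes first
    return table[d1][d2]

print(get_offset(table, 2, 0))  # 1.5, same as table[0][2]
```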
Now I need to look at Matthew's crossCorr_v4.py script and see how he implements timeOffsetData. The lines where he implements it are lines 147-152:
if jDetIndex < iDetIndex:
    timeOffset = -float(timeOffsetList[int(ccEvent['detector'][j])][int(ccEvent['detector'][i])])
else:
    timeOffset = -float(timeOffsetList[int(ccEvent['detector'][i])][int(ccEvent['detector'][j])])
dT = float(ccEvent['time'][i]) - float(ccEvent['time'][j]) + timeOffset
I would like to make my implementation simpler. Matthew calculates a negative time offset, then adds that. But what makes it the most difficult to read is the indexing... rather than just a d1 or d2 number, Matthew uses int(ccEvent['detector'][j]), which is very lengthy.
Look back at how I calculate dt. Is it possible to call the time offset values for all detectors at once?
I have the detector channels that triggered in a vector, and their corresponding fission chamber channels. Can I call them together?
In [76]:
print(dets_present)
print(fc_corr)
print(timeOffsetData[0][15])
print(timeOffsetData[fc_corr[0]][dets_present[0]])
It looks like I can't call two indices at once by just calling timeOffsetData[fc_corr][dets_present]. Instead I need to create a new array of time offset values for the detectors involved in the interaction. Build it into the code I developed above (indicated by #(TARGET)).
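For what it's worth, the chained form fails because each [] selects whole rows; passing both index arrays together as a tuple does perform a pairwise element lookup, sketched here on a toy table:

```python
import numpy as np

toy_table = np.arange(16.0).reshape(4, 4)  # stand-in for timeOffsetData
fc_corr = np.array([0, 0, 2])
dets_present = np.array([1, 3, 3])

# One element per (row, column) pair: (0,1), (0,3), (2,3)
offsets = toy_table[fc_corr, dets_present]
print(offsets)  # [ 1.  3. 11.]
```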
In [77]:
for l, e in enumerate(data[:100]['event']):
    # print("Reading line: ",l,"; event: ",e)
    if e == eventNum: # Still on the same event
        pass # Don't do anything.... but hold this here in case I want to add something
    if e != eventNum: # Store info from current event, move onto next event.
        j = l # The last index of eventNum is the previous line
        n_ints = j-i # Number interactions in current event
        if n_ints > 2: # If there are more than 2 interactions in the event
            ccedEvent = data[i:j][:] # Pull out the data from this event
            chs_present = ccedEvent[:]['detector'] # What channels triggered?
            chs_bool = np.in1d(chs_present,detList) # True = detector, False = fission chamber
            if sum(chs_bool)>1: # If at least two interactions were detectors / two or more Trues
                # Did the corresponding fc's trigger?
                dets_present = chs_present[chs_bool] # Which det channels triggered?
                fc_corr = 16*np.floor(dets_present/16) # Corresponding fission chambers for each det channel
                fc_corr = fc_corr.astype(int) # Convert to int
                fc_bool = np.in1d(fc_corr,chs_present) # Did the corresponding fission chamber trigger?
                if sum(fc_bool)>1: # If at least two detectors had corresponding fc triggers
                    print('bicorrelation!')
                    print(ccedEvent)
                    # Only keep events where det and corresponding fc triggered (fc_bool = True)
                    dets_present = dets_present[fc_bool]
                    fc_corr = fc_corr[fc_bool]
                    det_indices = np.zeros(len(dets_present),dtype=np.int8) # Indices in chs_present of these det chs
                    fc_indices = np.zeros(len(fc_corr),dtype=np.int8) # Indices in chs_present of the corresponding fc chs
                    time_offset = np.zeros(len(dets_present),dtype=np.float64) #(TARGET)
                    for d in range(0,len(dets_present),1):
                        det_indices[d] = np.where(chs_present == dets_present[d])[0]
                        fc_indices[d] = np.where(chs_present == fc_corr[d])[0]
                        time_offset[d] = timeOffsetData[fc_corr[d]][dets_present[d]] #(TARGET)
                    print('det_indices',det_indices)
                    print('fc_indices',fc_indices)
                    # Store dt and particle type for each detector event
                    dt = np.zeros(len(dets_present),dtype=np.float64)
                    par_type = np.zeros(len(dets_present),dtype=np.int8)
                    # Correct for time offset
                    dt = ccedEvent[det_indices]['time']-ccedEvent[fc_indices]['time']+time_offset
                    print('dt',dt)
                    par_type = ccedEvent[det_indices]['particle_type']
                    print('par_type',par_type) # par_type 1=n, 2=p
        eventNum = e # Move on to next event
        i = l # Current line is the first line for next event
Check that the time offset worked. Look at event 16.
Without time offset correction:
ccedEvent:
[(16, 16, 2, 56.83219909667969, 1.0, 0.4312700033187866)
(16, 22, 2, 26.677501678466797, 0.8963900208473206, 0.13186000287532806)
(16, 30, 2, -75.39820098876953, 0.6321300268173218, 0.07415000349283218)
(16, 32, 2, 57.012699127197266, 1.0, 0.43167001008987427)
(16, 43, 2, 79.0250015258789, 0.2817400097846985, 0.05237000063061714)]
chs_present: [16 22 30 32 43]
dets_present: [22 30 43]
fc_corr: [16 16 32]
det_indices [1 2 4]
fc_indices [0 0 3]
dt [ -30.15469742 -132.23040009 22.0123024 ]
par_type [2 2 2]
With time offset correction:
ccedEvent:
[(16, 16, 2, 56.83219909667969, 1.0, 0.4312700033187866)
(16, 22, 2, 26.677501678466797, 0.8963900208473206, 0.13186000287532806)
(16, 30, 2, -75.39820098876953, 0.6321300268173218, 0.07415000349283218)
(16, 32, 2, 57.012699127197266, 1.0, 0.43167001008987427)
(16, 43, 2, 79.0250015258789, 0.2817400097846985, 0.05237000063061714)]
chs_present: [16 22 30 32 43]
dets_present: [22 30 43]
fc_corr: [16 16 32]
det_indices [1 2 4]
fc_indices [0 0 3]
dt [ -60.16549442 -120.95131209 32.3304754 ]
par_type [2 2 2]
The first detector is channel 22. What is the time offset between that channel and the corresponding fission chamber on channel 16?
In [78]:
print(timeOffsetData[16][22])
The difference in dt with and without the time offset correction is adjusted by exactly that amount, so I'll call it correct. The question remains: should I be adding or subtracting the time offset? At first I was subtracting the time_offset, but the $\gamma\gamma$ distribution was not centered at (0,0), so I changed it to adding.
Write the event information to a file called bicorr. Practice writing out variables to a text file. What information do I want to store from the bicorrelation event?
In [86]:
# Open a text file to write to
printFile = open('../datar/1/bicorr1','w')
print(ccedEvent[0]['event'])
print(dt)
print(dets_present)
print(par_type)
d1 = 0
d2 = 1
printFile.write(str(ccedEvent[0]['event']) + ' ' + str(dets_present[d1]) + ' ' + str(par_type[d1]) + ' ' + str(dt[d1])
+ ' ' + str(dets_present[d2]) + ' ' + str(par_type[d2]) + ' ' + str(dt[d2]))
printFile.close()
Print the contents of the file I just generated here:
In [87]:
with open('../datar/1/bicorr1') as f: # The with keyword automatically closes the file when you are done
    print(f.read())
Now insert this into the loop.
In [88]:
# Open a text file to write to
printFile = open('../datar/1/bicorr1','w')
# Identify whether each event was a bicorrelation event by looking at the number of interactions and the channel numbers
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists(print_flag = True)
# l is the line number of the current line, starting at 0.
# e is the event number of the current line, starting at 1
# eventNum is the current event number. From lines i to j.
eventNum = data[0]['event']; # Start with the first event in the data chunk.
# If reading entire file, this is 1. If reading a chunk, this may be higher
i = 0; # First line number of first event is always 0
for l, e in enumerate(data[:]['event']):
    if e == eventNum: # Still on the same event
        pass
    if e != eventNum: # Store info from current event, move onto next event.
        j = l # The last index of eventNum is the previous line
        n_ints = j-i # Number interactions in current event
        if n_ints > 2: # If > 2 interactions in the event
            ccedEvent = data[i:j][:] # Pull out the data from this event
            chs_present = ccedEvent[:]['detector'] # What channels triggered?
            chs_bool = np.in1d(chs_present,detList) # True = detector, False = fission chamber
            if sum(chs_bool)>1: # If >2 detectors, did corr fc's trigger?
                dets_present = chs_present[chs_bool] # Which det channels triggered?
                fc_corr = (16*np.floor(dets_present/16)).astype(int) # Corr fc for each det ch
                fc_bool = np.in1d(fc_corr,chs_present) # Did the corresponding fission chamber trigger?
                if sum(fc_bool)>1: # If >2 det + corr fc, keep those
                    dets_present = dets_present[fc_bool]
                    fc_corr = fc_corr[fc_bool]
                    det_indices = np.zeros(len(dets_present),dtype=np.int8) # Where does chs_present = these det chs?
                    fc_indices = np.zeros(len(fc_corr),dtype=np.int8) # Where does chs_present = these fc chs?
                    time_offset = np.zeros(len(dets_present),dtype=np.float64) # Store time offset
                    for d in range(0,len(dets_present),1):
                        det_indices[d] = np.where(chs_present == dets_present[d])[0]
                        fc_indices[d] = np.where(chs_present == fc_corr[d])[0]
                        time_offset[d] = timeOffsetData[fc_corr[d]][dets_present[d]]
                    # Store dt and particle type for each detector event
                    dt = np.zeros(len(dets_present),dtype=np.float64)
                    par_type = np.zeros(len(dets_present),dtype=np.int8)
                    # Correct for time offset
                    dt = ccedEvent[det_indices]['time']-ccedEvent[fc_indices]['time']-time_offset
                    par_type = ccedEvent[det_indices]['particle_type']
                    # Write out event info from all detector pairs
                    for d1 in range(0,len(det_indices)-1,1):
                        for d2 in range(d1+1,len(det_indices),1):
                            printFile.write(str(ccedEvent[0]['event'])
                                + ' ' + str(dets_present[d1]) + ' ' + str(par_type[d1]) + ' ' + str(dt[d1])
                                + ' ' + str(dets_present[d2]) + ' ' + str(par_type[d2]) + ' ' + str(dt[d2])
                                + '\n')
        eventNum = e # Move on to next event
        i = l # Current line is the first line for next event
printFile.close()
The bicorr file that was produced has multiple lines for some events:
16 22 2 -30.1546974182 30 2 -132.230400085
16 22 2 -30.1546974182 43 2 22.0123023987
16 30 2 -132.230400085 43 2 22.0123023987
44 29 1 55.1141014099 37 1 56.5508003235
52 10 2 206.289699554 30 2 10.6130981445
76 10 2 -112.366802216 14 2 -34.8992996216
87 4 1 41.3330993652 38 2 7.32550048828
93 19 1 13.7381019592 20 2 -27.9251976013
98 7 2 -127.604602814 34 2 10.223400116
98 7 2 -127.604602814 37 2 -73.8257026672
98 34 2 10.223400116 37 2 -73.8257026672
99 22 2 187.418201447 26 2 185.284301758
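As an aside, these bicorr lines could later be read back with a structured dtype mirroring the write format; the field names here are my own, hypothetical choices:

```python
import io
import numpy as np

bicorrType = np.dtype([('event', np.int32),
                       ('det1', np.int8), ('par1', np.int8), ('dt1', np.float64),
                       ('det2', np.int8), ('par2', np.int8), ('dt2', np.float64)])

# Two lines copied from the bicorr output above
sample = """16 22 2 -30.1546974182 30 2 -132.230400085
16 22 2 -30.1546974182 43 2 22.0123023987"""
bicorr_data = np.genfromtxt(io.StringIO(sample), dtype=bicorrType)
print(bicorr_data['det1'], bicorr_data['det2'])  # [22 22] [30 43]
```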
This looks good. I need to try it on the cluster, and figure out how to run it in all of the subfolders. Functionalize it.
In [89]:
folders = np.arange(1,3,1)
print(folders)
In [90]:
cced_root = 'cced'
root_path = '../datar/'
for folder in folders:
    cced_open = os.path.join(root_path, str(folder), cced_root + str(folder))
    print(folder)
    print(cced_open)
    data = np.genfromtxt(cced_open, dtype=ccedType)
I folded this into the bicorr.py script.
This produces a bicorr file in each folder. I would like to combine them into one giant bicorr file. Try it out.
In [93]:
folders = np.arange(1,3,1) # np.array([1,2])
print(folders)
with open('bicorr_all','w') as outfile:
    for folder in folders:
        bicorr_file = os.path.join(root_path, str(folder), 'bicorr' + str(folder))
        print('opening:',bicorr_file)
        with open(bicorr_file) as infile:
            for line in infile:
                outfile.write(line)
In [98]:
print(inspect.getsource(bicorr.generate_bicorr))
In [101]:
bicorr.generate_bicorr(folder_start=1,folder_end=3,root_path='../datar')
Matthew pointed out that I need to implement a pulse height threshold in my analysis. I had not included one, and he ran the data with a very low pulse height threshold (40 keVee). I need to implement one at 100 keVee (a little conservative vs. 80 keVee, but we will try it there first).
I am going to tack this on to the end of this notebook rather than rewriting the notebook. Let's see.
As a reminder, the format of the cced file is:
evnum
chnum
part
time
totint
height
The pulse height column is already calibrated to MeVee, so I can set a threshold on that column at 0.1. I need both neutrons to exceed 0.1 MeVee.
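The threshold test might be sketched as a boolean mask over the height column (toy values; a detector pair would be kept only if both members pass):

```python
import numpy as np

Ethresh = 0.1  # MeVee
height = np.array([0.4312, 0.1319, 0.0742])  # made-up pulse heights for one event
above = height > Ethresh  # both members of a pair must be True for the pair to survive
print(above)  # [ True  True False]
```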
I'm going to run it first without this energy threshold, and then run it with the energy threshold and see how things compare.
For each one I will calculate the number of lines in the resulting file. Use this function:
In [4]:
def file_len(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1
Original: on the master branch
In [5]:
bicorr.generate_bicorr(folder_start=1,folder_end=2,root_path='../datar')
How can I quantify this file? Maybe look at the number of lines in the bicorr file.
In [6]:
fname = r'../datar/1/bicorr1'
In [6]:
file_len(fname)
Out[6]:
New: on the threshold branch, $E_{thresh} = 100$ keVee
In [8]:
bicorr.generate_bicorr(folder_start=1,folder_end=2,root_path='../datar',Ethresh=0.1)
print(file_len(fname))
Wow... that is a huge drop in the number of events. Could this make our error bars larger? Or will this not matter later since I was looking at metrics calculated in higher energy regions anyway? This is something to keep in mind.
Also: on the threshold branch, $E_{thresh} = 80$ keVee
In [9]:
bicorr.generate_bicorr(folder_start=1,folder_end=2,root_path='../datar',Ethresh=0.08)
print(file_len(fname))
So it looks like the jump from 80 to 100 keVee is not that significant, but including any energy threshold is a big jump from nothing. Maybe measure a few...
In [7]:
fig = plt.figure(figsize=(4,3))
ax = plt.gca()
for Ethresh in np.arange(0.0,0.15,.005):
    bicorr.generate_bicorr(folder_start=1,folder_end=2,root_path='../datar',Ethresh=Ethresh)
    ax.scatter(Ethresh, file_len(fname))
plt.xlabel('Energy threshold (MeVee)')
plt.ylabel('Number of lines in bicorr1')
plt.show()
This looks good. Deploy.
In [ ]: