This tutorial will go over the basics of analyzing eggs, the primary data structure used in quail. To learn about how an egg is set up, see the egg tutorial.
An egg is made up of (at minimum) the stimuli presented to a subject and the stimuli recalled by the subject. With these two components, we can perform a number of analyses:
1. Recall Accuracy - the proportion of recalled stimuli that were in the encoding lists
2. Serial Position Curve (spc) - recall accuracy as a function of the encoding position of the stimulus
3. Probability of First Recall (pfr) - the probability that a stimulus will be recalled first as a function of its encoding position
4. Lag-CRP - given the recall of a stimulus, the probability of next recalling stimuli from neighboring encoding positions
5. Temporal Clustering - a measure of how strongly recall order reflects temporal proximity during encoding
If we have a set of features for the stimuli, we can also compute a Memory Fingerprint, which is an estimate of how a subject clusters their recall responses with respect to features of a stimulus (see the fingerprint tutorial for more on this).
Let's get to analyzing some eggs. First, we'll load in some example data:
In [ ]:
import quail
%matplotlib inline
egg = quail.load_example_data()
This dataset comprises 30 subjects, who each performed 8 study/test blocks of 16 words each. Here are some of the presented words:
In [ ]:
egg.get_pres_items().head()
and some of the recalled words:
In [ ]:
egg.get_rec_items().head()
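You can also confirm these dimensions directly from the egg itself. A quick check, assuming the n_subjects, n_lists, and list_length attributes exposed by recent versions of the Egg object:
In [ ]:
# dimensions of the example dataset (attribute names assume a recent quail version)
egg.n_subjects, egg.n_lists, egg.list_length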
We can start with the simplest analysis - recall accuracy - which is just the proportion of stimuli recalled that were in the encoding lists. To compute accuracy, simply call the analyze method with the analysis keyword argument set to 'accuracy':
In [ ]:
acc = egg.analyze('accuracy')
acc.get_data().head()
The result is a FriedEgg data object. The accuracy data can be retrieved using the get_data method, which returns a multi-index Pandas DataFrame where the first-level index is the subject identifier and the second-level index is the list number. Note that by default, each list is analyzed separately.
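Because the result is a standard multi-index DataFrame, ordinary Pandas indexing applies. A couple of illustrative slices, assuming subjects are labeled with integers starting at 0 as in the example data:
In [ ]:
# all list-level accuracies for the first subject
acc.get_data().loc[0]
In [ ]:
# per-subject mean accuracy, collapsing over the list-level index
acc.get_data().groupby(level=0).mean()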
Rather than averaging by hand, you can easily return the average over lists using the listgroup keyword argument:
In [ ]:
accuracy_avg = egg.analyze('accuracy', listgroup=['average']*8)
accuracy_avg.get_data().head()
Now, the result is a single value for each subject representing the average accuracy across the 8 lists. The listgroup kwarg can also be used to do some fancier groupings, like splitting the data into the first and second half of the experiment:
In [ ]:
accuracy_split = egg.analyze('accuracy', listgroup=['First Half']*4+['Second Half']*4)
accuracy_split.get_data().head()
These analysis results can be passed directly into the plot function like so:
In [ ]:
accuracy_split.plot()
For more details on plotting, see the advanced plotting tutorial. Next, let's take a look at the serial position curve analysis. As stated above, the serial position curve (or spc) computes recall accuracy as a function of the encoding position of the stimulus. To use it, call the same analyze method illustrated above, but set the analysis kwarg to spc. Let's also average across lists within subject:
In [ ]:
spc = egg.analyze('spc', listgroup=['average']*8)
spc.get_data().head()
The result is a df where each row is a subject and each column is the encoding position of the word. To plot, simply pass the result of the analysis function to the plot function:
In [ ]:
spc.plot(ylim=[0, 1])
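The underlying values can also be summarized directly with Pandas; for example, the column means give a group-averaged serial position curve (a quick sketch using plain Pandas, not a quail-specific method):
In [ ]:
# group-averaged serial position curve: mean recall probability at each
# encoding position, averaged over subjects
spc.get_data().mean(axis=0)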
The next analysis we'll take a look at is the probability of first recall, which is the probability that a word will be recalled first as a function of its encoding position. To compute this, call the analyze method with the analysis kwarg set to pfr. Again, we'll average over lists:
In [ ]:
pfr = egg.analyze('pfr', listgroup=['average']*8)
pfr.get_data().head()
This df is set up just like the serial position curve. To plot:
In [ ]:
pfr.plot()
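As a sanity check on the pfr values, each subject's probabilities should sum to at most 1 across encoding positions, since each list has only one first recall (sums can dip below 1 if a subject recalled nothing on some lists). A quick check using plain Pandas:
In [ ]:
# row sums should be <= 1: at most one first recall per list
pfr.get_data().sum(axis=1)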
Next up is the lag-CRP (lag conditional response probability) analysis which, given the recall of a stimulus, computes the probability of next recalling stimuli from neighboring encoding positions (lags of +/-1, 2, 3, and so on). To run it, set the analysis kwarg to lagcrp. Again, we'll average over lists:
In [ ]:
lagcrp = egg.analyze('lagcrp', listgroup=['average']*8)
lagcrp.get_data().head()
Unlike the previous two analyses, this one returns a df where the number of columns is double the length of the lists: each column corresponds to a transition lag rather than an encoding position. To view the results:
In [ ]:
lagcrp.plot()
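If you want to work with the underlying values, the columns of this df are the transition lags themselves (negative lags are backward transitions, positive lags are forward). You can inspect them with plain Pandas:
In [ ]:
# the column index holds the transition lags
lagcrp.get_data().columns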
Another way to evaluate temporal clustering is to measure the temporal distance of each transition made with respect to where on a list the subject could have transitioned. This 'temporal clustering score' is a good summary of how strongly participants are clustering their responses according to temporal proximity during encoding.
In [ ]:
temporal = egg.analyze('temporal', listgroup=['First Half']*4+['Second Half']*4)
temporal.plot(plot_style='violin', ylim=[0,1])
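To compare the two halves numerically rather than visually, you can average within each list group. A sketch, assuming the listgroup labels occupy the second level of the multi-index as in the accuracy analysis above:
In [ ]:
# mean temporal clustering score for each half of the experiment
temporal.get_data().groupby(level=1).mean()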
Last but not least is the memory fingerprint analysis. For a detailed treatment of this analysis, see the fingerprint tutorial.
As described in the fingerprint tutorial, the features data structure is used to estimate how subjects cluster their recall responses with respect to the features of the encoded stimuli. Briefly, these estimates are derived by computing the similarity of neighboring recalled words along each feature dimension. For example, if you recall "dog", and then the next word you recall is "cat", your clustering-by-category score would increase because the two recalled words are in the same category. Similarly, if after you recall "cat" you recall the word "can", your clustering-by-starting-letter score would increase, since both words share the first letter "c". This logic can be extended to any number of feature dimensions.
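To make that logic concrete, here is a toy sketch of counting same-feature transitions. Note that this is only an illustration of the idea; match_fraction is a hypothetical helper, and quail's actual fingerprint estimates are computed differently (see the fingerprint tutorial):
In [ ]:
# toy illustration: the fraction of adjacent recall pairs that match on a
# given feature (hypothetical helper, not part of the quail API)
def match_fraction(recalls, feature):
    pairs = list(zip(recalls[:-1], recalls[1:]))
    matches = sum(feature(a) == feature(b) for a, b in pairs)
    return matches / len(pairs)

recalls = ['dog', 'cat', 'can', 'shovel']
match_fraction(recalls, feature=lambda w: w[0])  # first letters: 1 of 3 transitions match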
Here is a list of the feature dimensions in this egg:
In [ ]:
egg.feature_names
Like the other analyses, computing the memory fingerprint can be done using the analyze method with the analysis kwarg set to fingerprint:
In [ ]:
fingerprint = egg.analyze('fingerprint', listgroup=['average']*8)
fingerprint.get_data().head()
The result of this analysis is a df, where each row is a subject's fingerprint and each column is a feature dimension. The values represent a subject's tendency to cluster their recall responses along a particular feature dimension. They are probability values, and thus greater values indicate more clustering along that feature dimension. To plot, simply pass the result to the plot function:
In [ ]:
order = sorted(egg.feature_names)
fingerprint.plot(order=order, ylim=[0, 1])
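You can also read each subject's dominant dimension straight off the df; for example, idxmax returns the feature with the highest clustering score per subject (plain Pandas, not a quail method):
In [ ]:
# the most strongly clustered feature dimension for each subject
fingerprint.get_data().idxmax(axis=1)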
This result suggests that subjects in this example dataset tended to cluster their recall responses by category as well as by the size (bigger or smaller than a shoebox) of the words. List length and other properties of your experiment can bias these clustering scores. To help with this, we implemented a permutation procedure that shuffles the order of each recall list many times and evaluates the observed clustering score against the resulting null distribution. Note: this also works with the temporal clustering analysis.
In [ ]:
# warning: this can take a little while. Setting parallel=True will help speed up the permutation computation
# fingerprint = quail.analyze(egg, analysis='fingerprint', listgroup=['average']*8, permute=True, n_perms=100)
# ax = quail.plot(fingerprint, ylim=[0,1.2])