Bayesian Workflow Example (Police Officer's Dilemma)


In [1]:
import arviz as az
import bambi as bmb
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

In [2]:
az.style.use('arviz-darkgrid')

Here we will analyze a dataset from experimental psychology in which a sample of 36 human participants engaged in what is called the shooter task, yielding 3600 responses and reaction times (100 from each subject). The link above gives some more information about the shooter task, but basically it is a sort of crude first-person-shooter video game in which the subject plays the role of a police officer. The subject views a variety of urban scenes, and in each round or "trial" a person or "target" appears on the screen after some random interval. This person is either Black or White (with 50% probability), and they are holding some object that is either a gun or some other object like a phone or wallet (with 50% probability). When a target appears, the subject has a very brief response window -- 0.85 seconds in this particular experiment -- within which to press one of two keyboard buttons indicating a "shoot" or "don't shoot" response. Subjects receive points for correct and timely responses in each trial; subjects' scores are penalized for incorrect responses (i.e., shooting an unarmed person or failing to shoot an armed person) or for failing to respond within the 0.85-second response window. The goal of the task, from the subject's perspective, is to maximize their score.

The typical findings in the shooter task are that (a) subjects are quicker to respond to armed targets than to unarmed targets, but are especially quick toward armed black targets and especially slow toward unarmed black targets, and (b) subjects are more likely to shoot black targets than white targets, whether they are armed or not.

Load and examine data


In [3]:
shooter = pd.read_csv('data/shooter.csv', na_values='.')
shooter.head(10)


Out[3]:
subject target trial race object time response
0 1 w05 19 white nogun 658.0 correct
1 2 b07 19 black gun 573.0 correct
2 3 w05 19 white gun 369.0 correct
3 4 w07 19 white gun 495.0 correct
4 5 w15 19 white nogun 483.0 correct
5 6 w96 19 white nogun 786.0 correct
6 7 w13 19 white nogun 519.0 correct
7 8 w06 19 white nogun 567.0 correct
8 9 b14 19 black gun 672.0 incorrect
9 10 w90 19 white gun 457.0 correct

The design of the experiment is such that the subject, target, and object (i.e., gun vs. no gun) factors are fully crossed: each subject views each target twice, once with a gun and once without a gun.


In [4]:
pd.crosstab(shooter['subject'], [shooter['target'], shooter['object']])


Out[4]:
target b01 b02 b03 b04 b05 ... w95 w96 w97 w98 w99
object gun nogun gun nogun gun nogun gun nogun gun nogun ... gun nogun gun nogun gun nogun gun nogun gun nogun
subject
1 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
10 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
11 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
12 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
13 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
14 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
15 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
16 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
17 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
18 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
19 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
20 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
21 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
22 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
23 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
24 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
25 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
26 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
27 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
28 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
29 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
30 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
31 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
32 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
33 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
34 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
35 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
36 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1

36 rows × 100 columns

The response speeds on each trial are recorded as reaction times (milliseconds per response), but here we invert them and multiply by 1000 so that we are analyzing response rates (responses per second). There is no theoretical reason to prefer one of these metrics over the other, but it turns out that response rates tend to have nicer distributional properties than reaction times (i.e., they deviate less strongly from the standard Gaussian assumptions), so response rates will be a little more convenient for us, allowing us to use some fairly standard distributional models.


In [5]:
shooter['rate'] = 1000.0/shooter['time']



In [6]:
plt.hist(shooter['rate'].dropna());


Fit response rate models

Random subject effects only

Our first model is analogous to how the data from the shooter task are usually analyzed: incorporating all subject-level sources of variability, but ignoring the sampling variability due to the sample of 50 targets. This is a Bayesian generalized linear mixed model (GLMM) with a Normal response and with intercepts and slopes that vary randomly across subjects. Of note here is the C(x, Sum) syntax, which is from the Patsy library that we use to parse formulae. This instructs Bambi to use contrast codes of -1 and +1 for the two levels of each of the fixed factors of race (black vs. white) and object (gun vs. no gun), so that the race and object coefficients can be interpreted as simple effects on average across the levels of the other factor (directly analogous, but not quite equivalent, to the main effects). This is the standard coding used in ANOVA.
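To make the contrast coding concrete, here is a small stand-alone illustration (not part of the original notebook) of how Patsy's Sum coding handles a two-level factor; the toy data frame below is invented purely for this example.


In [ ]:
import pandas as pd
from patsy import dmatrix

# A toy two-level factor, coded the same way Patsy will code 'race' in the model below.
toy = pd.DataFrame({'race': ['black', 'white', 'black', 'white']})
print(dmatrix('C(race, Sum)', toy))
# The contrast column C(race, Sum)[S.black] is +1 for 'black' rows and -1 for
# 'white' rows (the last level is the omitted reference), so its coefficient
# equals half the black-minus-white difference.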


In [7]:
subj_model = bmb.Model(shooter, dropna=True)
subj_fitted = subj_model.fit('rate ~ C(race, Sum)*C(object, Sum)',
                             random=['C(race, Sum)*C(object, Sum)|subject'],
                             samples=1000, init=None)


/home/osvaldo/proyectos/00_BM/Bambi/bambi/bambi/models.py:160: UserWarning: Automatically removing 98/3600 rows from the dataset.
  warnings.warn(msg)
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [rate_sd, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_offset, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_sd, C(object, Sum)[S.gun]|subject_offset, C(object, Sum)[S.gun]|subject_sd, C(race, Sum)[S.black]|subject_offset, C(race, Sum)[S.black]|subject_sd, 1|subject_offset, 1|subject_sd, C(race, Sum):C(object, Sum), C(object, Sum), C(race, Sum), Intercept]
100.00% [3000/3000 00:28<00:00 Sampling 2 chains, 0 divergences]
The number of effective samples is smaller than 25% for some parameters.

First let's visualize the default priors that Bambi automatically decided on for each of the parameters. We do this by calling the .plot() method of the Model object.


In [8]:
subj_model.plot();


The priors on the fixed effects seem quite reasonable. Recall that because of the -1 vs +1 contrast coding, the coefficients correspond to half the difference between the two levels of each factor. So the priors on the fixed effects essentially say that the black vs. white and gun vs. no gun (and their interaction) response rate differences are very unlikely to be as large as a full response per second. The priors on the standard deviations of the random effects are yoked to the corresponding fixed effect priors (see our technical paper for more info).
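One practical consequence of this coding, which we will use when interpreting the estimates: the full difference between the two levels of a factor is simply twice the corresponding coefficient. As a quick sketch using the posterior samples from the fit above:


In [ ]:
# Full gun-minus-nogun difference in response rate (responses per second):
# twice the C(object, Sum) coefficient, because of the +1/-1 coding.
full_object_diff = 2 * subj_fitted.posterior['C(object, Sum)']
print(float(full_object_diff.mean()))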

Now let's visualize the model estimates. We do this by passing the results object returned by Model.fit() to ArviZ's az.plot_trace() function.


In [9]:
# Collect the names of the sampled variables, excluding the '_offset' helper
# variables, in case we want to restrict later plots or summaries to the main
# parameters. The posterior samples themselves live in subj_fitted.posterior.
m_vars = [str(v) for v in subj_model.backend.model.vars]
plot_vars = [v for v in m_vars if not v.endswith('_offset')]

In [10]:
az.plot_trace(subj_fitted, compact=True);


Each distribution in the plots above has 2 densities because we sampled 2 MCMC chains, so we are viewing the results of both chains prior to their aggregation. The main message from the plot above is that the chains all seem to have converged well and the resulting posterior distributions all look quite reasonable. It's a bit easier to digest all this information in a concise, tabular form, which we can get by passing the results to az.summary().


In [11]:
# note: by default, subj_model.plot() displays a different set of variables than az.summary() does
az.summary(subj_fitted)


Out[11]:
mean sd hpd_3% hpd_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
Intercept[0] 1.708 0.014 1.680 1.733 0.001 0.000 431.0 431.0 432.0 736.0 1.01
C(race, Sum)[0] -0.001 0.004 -0.009 0.007 0.000 0.000 2629.0 949.0 2623.0 1536.0 1.00
C(object, Sum)[0] 0.093 0.006 0.082 0.105 0.000 0.000 1310.0 1306.0 1304.0 1255.0 1.00
C(race, Sum):C(object, Sum)[0] 0.023 0.004 0.015 0.031 0.000 0.000 3674.0 3629.0 3712.0 1418.0 1.00
1|subject_offset[0] -0.033 0.318 -0.617 0.565 0.009 0.007 1271.0 1184.0 1269.0 1553.0 1.00
... ... ... ... ... ... ... ... ... ... ... ...
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[32] 0.000 0.006 -0.012 0.012 0.000 0.000 2749.0 1052.0 2899.0 1477.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[33] -0.000 0.006 -0.012 0.013 0.000 0.000 2596.0 1409.0 2748.0 1702.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[34] 0.000 0.006 -0.011 0.014 0.000 0.000 2413.0 1170.0 2865.0 1709.0 1.01
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[35] -0.001 0.007 -0.016 0.011 0.000 0.000 2759.0 1287.0 2959.0 1852.0 1.00
rate_sd 0.240 0.003 0.234 0.245 0.000 0.000 3892.0 3875.0 3916.0 1447.0 1.01

297 rows × 11 columns

The take-home message from the analysis seems to be that we do find evidence for the usual finding that subjects are especially quick to respond (presumably with a shoot response) to armed black targets and especially slow to respond to unarmed black targets (while unarmed white targets receive "don't shoot" responses with less hesitation). We see this in the fact that the marginal posterior for the C(race, Sum)[S.black]:C(object, Sum)[S.gun] interaction coefficient is concentrated strongly away from 0.
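We can quantify that statement the same way we will for the later models, by computing the proportion of posterior samples in which the interaction coefficient falls below zero (a quick sketch using the fitted object above; the exact value isn't shown in the original notebook).


In [ ]:
# Posterior probability that the race-by-object interaction coefficient is negative;
# a value near 0 means the posterior mass sits almost entirely above zero.
print(float((subj_fitted.posterior['C(race, Sum):C(object, Sum)'] < 0).mean()))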

Random stimulus effects

A major flaw in the analysis above is that random stimulus effects are ignored. The model does include random effects for subjects, reflecting the fact that the subjects we observed are but a sample from the broader population of subjects we are interested in and that potentially could have appeared in our study. But the targets we observed -- the 50 photographs of white and black men that subjects responded to -- are also but a sample from the broader theoretical population of targets we are interested in talking about, and that we could have just as easily and justifiably used as the experimental stimuli in the study. Since the stimuli comprise a random sample, they are subject to sampling variability, and this sampling variability should be accounted for in the analysis by including random stimulus effects. For some more information on this, see here, particularly pages 62-63.

To account for this, we let the intercept and object slope vary randomly across targets. Random object slopes across targets are possible because, if you recall, the design of the study was such that each target gets viewed twice by each subject, once with a gun and once without a gun. However, because each target is always either white or black, it's not possible to add random slopes for the race factor or the interaction.
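We can verify this property of the design directly from the data (a quick check written for this walkthrough): each target should appear with both object levels but with only one race.


In [ ]:
# Each target is paired with exactly one race but with both object levels.
print(shooter.groupby('target')['race'].nunique().max())    # expect 1
print(shooter.groupby('target')['object'].nunique().min())  # expect 2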


In [12]:
stim_model = bmb.Model(shooter, dropna=True)
stim_fitted = stim_model.fit('rate ~ C(race, Sum)*C(object, Sum)',
                             random=['C(race, Sum)*C(object, Sum)|subject',
                                     'C(object, Sum)|target'],
                             samples=1000)


/home/osvaldo/proyectos/00_BM/Bambi/bambi/bambi/models.py:160: UserWarning: Automatically removing 98/3600 rows from the dataset.
  warnings.warn(msg)
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [rate_sd, C(object, Sum)[S.gun]|target_offset, C(object, Sum)[S.gun]|target_sd, 1|target_offset, 1|target_sd, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_offset, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_sd, C(object, Sum)[S.gun]|subject_offset, C(object, Sum)[S.gun]|subject_sd, C(race, Sum)[S.black]|subject_offset, C(race, Sum)[S.black]|subject_sd, 1|subject_offset, 1|subject_sd, C(race, Sum):C(object, Sum), C(object, Sum), C(race, Sum), Intercept]
100.00% [3000/3000 00:47<00:00 Sampling 2 chains, 0 divergences]
The estimated number of effective samples is smaller than 200 for some parameters.

Now let's look at the results...


In [13]:
az.plot_trace(stim_fitted, var_names=stim_model.term_names, compact=True);



In [14]:
az.summary(stim_fitted, var_names=stim_model.term_names)


Out[14]:
mean sd hpd_3% hpd_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
Intercept[0] 1.705 0.019 1.665 1.737 0.002 0.001 124.0 124.0 125.0 366.0 1.01
C(race, Sum)[0] -0.001 0.014 -0.025 0.025 0.001 0.001 163.0 163.0 167.0 200.0 1.02
C(object, Sum)[0] 0.094 0.015 0.067 0.121 0.001 0.001 224.0 224.0 232.0 560.0 1.02
C(race, Sum):C(object, Sum)[0] 0.024 0.015 -0.003 0.051 0.001 0.001 189.0 189.0 189.0 390.0 1.01
1|subject[0] -0.005 0.023 -0.046 0.042 0.001 0.001 392.0 392.0 391.0 894.0 1.00
... ... ... ... ... ... ... ... ... ... ... ...
C(object, Sum)[S.gun]|target[45] -0.176 0.030 -0.232 -0.123 0.001 0.001 498.0 474.0 493.0 991.0 1.01
C(object, Sum)[S.gun]|target[46] 0.078 0.032 0.019 0.139 0.001 0.001 607.0 607.0 601.0 1180.0 1.01
C(object, Sum)[S.gun]|target[47] 0.006 0.030 -0.052 0.063 0.001 0.001 525.0 525.0 516.0 949.0 1.01
C(object, Sum)[S.gun]|target[48] 0.086 0.030 0.027 0.140 0.001 0.001 464.0 464.0 466.0 802.0 1.01
C(object, Sum)[S.gun]|target[49] -0.020 0.031 -0.076 0.038 0.001 0.001 583.0 583.0 585.0 1142.0 1.01

248 rows × 11 columns

There are two interesting things to note here. The first is that the key interaction effect, C(race, Sum)[S.black]:C(object, Sum)[S.gun], is much less clear now. The marginal posterior is still mostly concentrated away from 0, but there's certainly a nontrivial part that overlaps with 0; 4.8% of the distribution, to be exact.


In [15]:
(stim_fitted.posterior['C(race, Sum):C(object, Sum)'] < 0).mean()


Out[15]:
<xarray.DataArray 'C(race, Sum):C(object, Sum)' ()>
array(0.048)

The second interesting thing is that the two new variance components in the model, those associated with the random stimulus effects, are rather large. This largely explains the first fact above: if these components had been estimated to be close to 0, the model estimates would not have been much different from those in the subj_model. It makes sense that there is a strong tendency for different targets to elicit different reaction times on average, which leads to a large estimate of 1|target_sd. Less obviously, the large estimate of C(object, Sum)[S.gun]|target_sd (targets tend to vary a lot in their response rate differences when they have a gun vs. some other object) also makes sense, because in this experiment, different targets were pictured with different non-gun objects. Some of these objects, such as a bright red can of Coca-Cola, are not easily confused with a gun, so subjects are able to quickly decide on the correct response. Other objects, such as a black cell phone, are more easily confused with a gun, so subjects take longer to decide on the correct response when confronted with them. Since each target is yoked to a particular non-gun object, there is good reason to expect large target-to-target variability in the object effect, which is indeed what we see in the model estimates.
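To see these two variance components directly, we can restrict the summary to them (a sketch; the variable names follow the sampler output above and may differ slightly across Bambi versions).


In [ ]:
# Posterior summaries for the stimulus-level standard deviations only.
az.summary(stim_fitted, var_names=['1|target_sd', 'C(object, Sum)[S.gun]|target_sd'])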

Fit response models

Here we seek evidence for the second traditional finding, that subjects are more likely to respond 'shoot' toward black targets than toward white targets, regardless of whether they are armed or not. Currently the dataset just records whether the given response was correct or not, so first we transform this into whether the response was 'shoot' or 'dontshoot'.


In [16]:
shooter['shoot_or_not'] = shooter['response'].astype(str)

# armed targets: a correct response means the subject shot
new_vals = {'correct': 'shoot', 'incorrect': 'dontshoot', 'timeout': np.nan}
shooter.loc[shooter['object'] == 'gun', 'shoot_or_not'] = \
    shooter.loc[shooter['object'] == 'gun', 'response'].astype(str).replace(new_vals)

# unarmed targets: a correct response means the subject held fire
new_vals = {'correct': 'dontshoot', 'incorrect': 'shoot', 'timeout': np.nan}
shooter.loc[shooter['object'] == 'nogun', 'shoot_or_not'] = \
    shooter.loc[shooter['object'] == 'nogun', 'response'].astype(str).replace(new_vals)
    
# view result
shooter.head(20)


Out[16]:
subject target trial race object time response rate shoot_or_not
0 1 w05 19 white nogun 658.0 correct 1.519757 dontshoot
1 2 b07 19 black gun 573.0 correct 1.745201 shoot
2 3 w05 19 white gun 369.0 correct 2.710027 shoot
3 4 w07 19 white gun 495.0 correct 2.020202 shoot
4 5 w15 19 white nogun 483.0 correct 2.070393 dontshoot
5 6 w96 19 white nogun 786.0 correct 1.272265 dontshoot
6 7 w13 19 white nogun 519.0 correct 1.926782 dontshoot
7 8 w06 19 white nogun 567.0 correct 1.763668 dontshoot
8 9 b14 19 black gun 672.0 incorrect 1.488095 dontshoot
9 10 w90 19 white gun 457.0 correct 2.188184 shoot
10 11 w91 19 white nogun 599.0 correct 1.669449 dontshoot
11 12 b17 19 black nogun 769.0 correct 1.300390 dontshoot
12 13 b04 19 black nogun 600.0 correct 1.666667 dontshoot
13 14 w17 19 white nogun 653.0 correct 1.531394 dontshoot
14 15 b93 19 black gun 468.0 correct 2.136752 shoot
15 16 w96 19 white gun 546.0 correct 1.831502 shoot
16 17 w91 19 white gun 591.0 incorrect 1.692047 dontshoot
17 18 b95 19 black gun NaN timeout NaN NaN
18 19 b09 19 black gun 656.0 correct 1.524390 shoot
19 20 b02 19 black gun 617.0 correct 1.620746 shoot

Let's skip straight to the correct model that includes random stimulus effects. This looks quite similar to the stim_model from above, except that we change the response to the new shoot_or_not variable -- notice the [shoot] syntax indicating that we wish to model the probability that shoot_or_not=='shoot', not shoot_or_not=='dontshoot' -- and we add family='bernoulli' to indicate a mixed effects logistic regression.


In [17]:
stim_response_model = bmb.Model(shooter.dropna())
stim_response_fitted = stim_response_model.fit(
    'shoot_or_not[shoot] ~ C(race, Sum)*C(object, Sum)',
    random=['C(race, Sum)*C(object, Sum)|subject',
            'C(object, Sum)|target'],
    samples=1500, family='bernoulli')


/home/osvaldo/proyectos/00_BM/Bambi/bambi/bambi/models.py:269: UserWarning: Modeling the probability that shoot_or_not=='shoot'
  self.y.name, str(self.clean_data[self.y.name].iloc[event])
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [C(object, Sum)[S.gun]|target_offset, C(object, Sum)[S.gun]|target_sd, 1|target_offset, 1|target_sd, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_offset, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_sd, C(object, Sum)[S.gun]|subject_offset, C(object, Sum)[S.gun]|subject_sd, C(race, Sum)[S.black]|subject_offset, C(race, Sum)[S.black]|subject_sd, 1|subject_offset, 1|subject_sd, C(race, Sum):C(object, Sum), C(object, Sum), C(race, Sum), Intercept]
100.00% [4000/4000 00:40<00:00 Sampling 2 chains, 5 divergences]
There were 2 divergences after tuning. Increase `target_accept` or reparameterize.
There were 3 divergences after tuning. Increase `target_accept` or reparameterize.
The number of effective samples is smaller than 25% for some parameters.

Show the trace plot


In [18]:
az.plot_trace(stim_response_fitted, compact=True);


Looks pretty good! Now for the more concise summary.


In [19]:
az.summary(stim_response_fitted)


Out[19]:
mean sd hpd_3% hpd_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
Intercept[0] -0.022 0.149 -0.300 0.246 0.004 0.003 1265.0 1265.0 1262.0 1462.0 1.0
C(race, Sum)[0] 0.219 0.141 -0.044 0.490 0.004 0.003 1507.0 1339.0 1516.0 1516.0 1.0
C(object, Sum)[0] 4.168 0.251 3.711 4.627 0.009 0.007 709.0 704.0 724.0 889.0 1.0
C(race, Sum):C(object, Sum)[0] 0.203 0.173 -0.107 0.526 0.005 0.004 1048.0 773.0 1104.0 1151.0 1.0
1|subject_offset[0] -0.005 0.953 -1.874 1.710 0.019 0.019 2501.0 1318.0 2498.0 1802.0 1.0
... ... ... ... ... ... ... ... ... ... ... ...
C(object, Sum)[S.gun]|target[45] 0.362 0.607 -0.791 1.471 0.013 0.013 2110.0 1076.0 2197.0 1705.0 1.0
C(object, Sum)[S.gun]|target[46] 0.043 0.569 -1.073 1.046 0.013 0.012 1936.0 1209.0 2003.0 1562.0 1.0
C(object, Sum)[S.gun]|target[47] 0.298 0.581 -0.814 1.363 0.012 0.011 2181.0 1427.0 2245.0 1661.0 1.0
C(object, Sum)[S.gun]|target[48] 0.355 0.600 -0.695 1.544 0.013 0.011 2300.0 1383.0 2360.0 1727.0 1.0
C(object, Sum)[S.gun]|target[49] 0.036 0.563 -0.993 1.106 0.012 0.011 2115.0 1266.0 2153.0 1708.0 1.0

498 rows × 11 columns

There is some evidence here for the hypothesis that subjects are more likely to shoot black targets, regardless of whether they are armed or not, but it is fairly weak. The marginal posterior for the C(race, Sum) coefficient is mostly concentrated away from 0, but it overlaps with 0 even more than the key interaction effect in the previous model did.


In [20]:
(stim_response_fitted.posterior['C(race, Sum)'] < 0).mean()


Out[20]:
<xarray.DataArray 'C(race, Sum)' ()>
array(0.05933333)
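Because this is a logistic regression with +1/-1 contrast coding, the C(race, Sum) coefficient is on the log-odds scale and the full black-vs-white difference is twice the coefficient, so exponentiating gives a posterior distribution over the black-vs-white odds ratio for a 'shoot' response. A quick sketch:


In [ ]:
# Posterior mean of the black-vs-white odds ratio for responding 'shoot'.
race_odds_ratio = np.exp(2 * stim_response_fitted.posterior['C(race, Sum)'])
print(float(race_odds_ratio.mean()))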