Model 1 with FPC mask

Level 1:

EVs:

stimulus application
stimulus learning
stimulus na
feedback correct
feedback incorrect
feedback na

Contrasts:

stimulus application>0
stimulus learning>0
stimulus application>stimulus learning

Level 2:

task001 task1
task001 task2
task001 task1>task2
task001 task2>task1

Level 3:

positive contrast
negative contrast

FPC mask

*Images from randomise (cluster-mass thresholding with t=2.49 and variance smoothing v=8) are thresholded at 0.95 (i.e., corrected p < .05) and overlaid on unthresholded t-maps.
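
As a sketch of that thresholding step (assuming nibabel is available; the t-map file name here is hypothetical, while rand_3_1_2.nii.gz follows the rand_* naming convention used below):

import os
import nibabel as nib
import numpy as np

# randomise writes 1-p images, so "thresholded at 0.95" keeps voxels with
# corrected p < .05
corrp = nib.load(os.path.join("data", "rand_3_1_2.nii.gz")).get_fdata()
tmap_img = nib.load(os.path.join("data", "tstat_3_1_2.nii.gz"))  # hypothetical t-map file
tmap = tmap_img.get_fdata()

# zero out t-values outside the significant clusters, keep the rest unthresholded
masked = np.where(corrp > 0.95, tmap, 0)
nib.save(nib.Nifti1Image(masked, tmap_img.affine), "tstat_3_1_2_thresh.nii.gz")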

Setup


In [1]:
import os
from IPython.display import IFrame
from IPython.display import Image

# This function renders interactive brain images with the Papaya viewer:
# it writes an HTML page that loads the given NIfTI files as overlays and
# returns an IFrame embedding that page in the notebook.
def render(name, brain_list):

    # prepend the data directory to each image name
    brain_files = [os.path.join("data", b) for b in brain_list]

    wdata = """
    <!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml" lang="en">
	<head>
		<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>

		<!-- iOS meta tags -->
		<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=no"/>
		<meta name="apple-mobile-web-app-capable" content="yes">
		<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">

		<link rel="stylesheet" type="text/css" href="../papaya/papaya.css?build=1420" />
		<script type="text/javascript" src="../papaya/papaya.js?build=1422"></script>

		<title>Papaya Viewer</title>

	<script type="text/javascript">

	var params = [];
	params["worldSpace"] = true;
	params["atlas"] = "MNI (Nearest Grey Matter)";
	params["images"] = %s;

	</script>

	</head>

	<body>

		<div class="papaya" data-params="params"></div>

	</body>
</html>
    """ % str(brain_files)

    # write the viewer page and embed it in the notebook
    fname = name + "index.html"
    with open(fname, 'w') as f:
        f.write(wdata)

    return IFrame(fname, width=800, height=600)
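
A quick usage example (the viewer name "demo_" is hypothetical; the NIfTI files follow the naming used throughout this notebook). This writes demo_index.html next to the notebook and returns the embedded viewer:

render("demo_", ["WB.nii.gz", "rand_3_1_2.nii.gz"])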

In [2]:
# cope indices selecting the current level 1/2/3 contrast;
# these are set before each call to paths() below
l1cope = "0"
l2cope = "0"
l3cope = "0"

# Build the file names for the current cope combination.
def paths():
    sliced_img = os.path.join("data", "img_" + l1cope + "_" + l2cope + "_" + l3cope + "_wb.png")
    wb_img = "WB.nii.gz"  # whole-brain background image
    cluster_corr = "rand_" + l1cope + "_" + l2cope + "_" + l3cope + ".nii.gz"  # randomise 1-p map
    tstat_img = os.path.join("data", "imgt_" + l1cope + "_" + l2cope + "_" + l3cope + "_wb.png")
    html_cl = l1cope + "_" + l2cope + "_" + l3cope       # viewer name for the cluster overlay
    html_t = l1cope + "_" + l2cope + "_" + l3cope + "t"  # viewer name for the t-map overlay
    return sliced_img, wb_img, cluster_corr, tstat_img, html_cl, html_t
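
As a quick check of the naming convention: the first contrast below uses copes 3/1/2, and with those set, paths() resolves to the following names.

l1cope, l2cope, l3cope = "3", "1", "2"
sliced_img, wb_img, cluster_corr, tstat_img, html_cl, html_t = paths()
print(sliced_img)       # data/img_3_1_2_wb.png
print(cluster_corr)     # rand_3_1_2.nii.gz
print(html_cl, html_t)  # 3_1_2 3_1_2t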

Model results

Rule learning and rule application in the matching task

Rule Learning > Rule Application


In [3]:
l1cope="3"
l2cope="1"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [4]:
render(html_cl,[wb_img,cluster_corr])


Out[4]:

Rule Application > Rule Learning


In [5]:
l1cope="3"
l2cope="1"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [6]:
render(html_cl,[wb_img,cluster_corr])


Out[6]:

Rule Learning > Baseline


In [7]:
l1cope="2"
l2cope="1"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [8]:
render(html_cl,[wb_img,cluster_corr])


Out[8]:

Baseline > Rule Learning


In [9]:
l1cope="2"
l2cope="1"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [10]:
render(html_cl,[wb_img,cluster_corr])


Out[10]:

Rule Application > Baseline


In [11]:
l1cope="1"
l2cope="1"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [12]:
render(html_cl,[wb_img,cluster_corr])


Out[12]:

Baseline > Rule Application


In [13]:
l1cope="1"
l2cope="1"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()
Image(sliced_img)


Out[13]:

In [14]:
render(html_cl,[wb_img,cluster_corr])


Out[14]:

Rule learning and rule application in the classification task

Rule Learning > Rule Application


In [15]:
l1cope="3"
l2cope="2"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [16]:
render(html_cl,[wb_img,cluster_corr])


Out[16]:

Rule Learning > Baseline


In [17]:
l1cope="2"
l2cope="2"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [18]:
render(html_cl,[wb_img,cluster_corr])


Out[18]:

Baseline > Rule Learning


In [19]:
l1cope="2"
l2cope="2"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [20]:
render(html_cl,[wb_img,cluster_corr])


Out[20]:

Rule Application > Baseline


In [21]:
l1cope="1"
l2cope="2"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [22]:
render(html_cl,[wb_img,cluster_corr])


Out[22]:

Baseline > Rule Application


In [23]:
l1cope="1"
l2cope="2"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [24]:
render(html_cl,[wb_img,cluster_corr])


Out[24]:

Rule learning in the matching and classification tasks

Matching > Classification


In [25]:
l1cope="2"
l2cope="3"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [26]:
render(html_cl,[wb_img,cluster_corr])


Out[26]:

Classification > Matching


In [27]:
l1cope="2"
l2cope="3"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [28]:
render(html_cl,[wb_img,cluster_corr])


Out[28]:

Rule application in the matching and classification tasks

Matching > Classification


In [29]:
l1cope="1"
l2cope="3"
l3cope="1"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [30]:
render(html_cl,[wb_img,cluster_corr])


Out[30]:

Classification > Matching


In [31]:
l1cope="1"
l2cope="3"
l3cope="2"
sliced_img,wb_img,cluster_corr,tstat_img,html_cl,html_t = paths()

In [32]:
render(html_cl,[wb_img,cluster_corr])


Out[32]: