2-Room Spatial Navigation Analyses

This document contains a demonstration of how to analyze and visualize the 2-Room Spatial Navigation data.

Note that this notebook contains its own parsing and analysis code in a very flexible format. It is NOT recommended that you use the cogrecon.core.data_flexing.spatial_navigation package for the 2-Room version of the Spatial Navigation Task; there are several critical differences between the tasks which would result in incorrect analysis using that library. Instead, we'll do the analysis directly in this notebook. The one exception to this recommendation is when searching for files, where we will still use the package.

First, we'll list some key parameters of our task - namely, the location of the data, the names of the items which were used, and the minimum number of expected trials.


In [ ]:
data_path = r'Z:\Kelsey\2017 Summer RetLu\Virtual_Navigation_Task\v5_2\NavigationTask_Data\Logged_Data'
study_labels = ['PurseCube', 'CrownCube', 'BasketballCube', 'BootCube', 'CloverCube', 'GuitarCube', 'HammerCube', 'LemonCube', 'IceCubeCube', 'BottleCube']
locations = [[8, -8], [-2, -23], [8, -38], [-14, -13], [15, -18], [-14, 7], [-14, 27], [-6, 18], [8, 22], [11, 5]]
correct_locations = {l: p for l, p in zip(study_labels, locations)}
min_num_trials = 4
iposition_directory = './saved_data/2-room-iposition'

This next block of code finds and sorts the data files into "individuals".


In [ ]:
from cogrecon.core.data_flexing.spatial_navigation.spatial_navigation_parser import catalog_files
import os

files = []
for walk_root, walk_dirs, walk_files in os.walk(data_path):
    for f in walk_files:
        files.append(os.path.join(walk_root, f))
individuals, excluded, non_matching = catalog_files(files, min_num_trials)

print('{0} individuals found.'.format(len(individuals)))

Next, the summary files can be read individually, the results aggregated into the 'test_results' object, and the data saved in the iPosition data format.


In [ ]:
import numpy as np
import logging
import os

if not os.path.exists(iposition_directory):
    os.makedirs(iposition_directory)

count = 1
num_items = len(study_labels)
test_results = {}
for individual in individuals:
    individual_result = []
    logging.info("Parsing Individual %s (%d/%d)." % (individual.subject_id, count, len(individuals)))
    count += 1
    trial_count = 1
    out_lines = []
    for trial in individual.trials:
        item_locations = {l: [0.0, 0.0] for l in study_labels}
        items_found = {l: False for l in study_labels}
        if trial.test_vr is None:
            continue
        with open(trial.test_vr, 'rb') as fp:
            lines = fp.readlines()
            lines.reverse()
            for line in lines:
                decoded_line = line.decode('ascii')
                if decoded_line.startswith('Object_Identity_Set'):
                    name, location_string = decoded_line[20:].strip().split(' : ')
                    x, y, z = [float(a) for a in location_string[1:-1].split(',')]
                    if not items_found[name]:
                        items_found[name] = True
                        item_locations[name] = [x, z]
        item_location_list = list(np.array([item_locations[l] for l in study_labels]).flatten())
        out_lines.append('\t'.join([str(a) for a in item_location_list]))
        individual_result.append(item_locations)
    with open(os.path.join(iposition_directory, '{0}position_data_coordinates.txt'.format(individual.subject_id)), 'w') as fp:
        for line in out_lines:
            fp.write(line + '\n')
    test_results[individual.subject_id] = individual_result

# Save actual_coordinates.txt
item_location_list = list(np.array(locations).flatten())
out_line = '\t'.join([str(a) for a in item_location_list])
with open(os.path.join(iposition_directory, 'actual_coordinates.txt'), 'w') as fp:
    for _ in range(0, min_num_trials):
        fp.write(out_line + '\n')
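Before running the pipeline, it can be worth sanity-checking the files we just wrote. The helper below is a minimal sketch (not part of cogrecon) that verifies every tab-separated row holds 2 * num_items numeric columns, i.e. an x and z coordinate per item:

```python
def check_iposition_rows(lines, num_items):
    """Return True if every tab-separated row holds 2 * num_items floats."""
    for row in lines:
        values = row.strip().split('\t')
        if len(values) != 2 * num_items:
            return False
        for v in values:
            float(v)  # raises ValueError if a value is not numeric
    return True

# A row of 4 coordinates passes for a 2-item layout...
print(check_iposition_rows(['1.0\t2.0\t3.0\t4.0'], 2))  # True
# ...but fails if columns are missing.
print(check_iposition_rows(['1.0\t2.0'], 2))  # False
```

To apply it to the real output, read the lines of each position_data_coordinates.txt file in iposition_directory and pass them along with len(study_labels).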

Finally, we'll get the iPosition output for the converted files.


In [ ]:
from cogrecon.core.batch_pipeline import batch_pipeline
import datetime
import logging

batch_pipeline(iposition_directory,
               datetime.datetime.now().strftime("Holodeck 2-Room Spatial Navigation - %Y-%m-%d_%H-%M-%S.csv"), 
               trial_by_trial_accuracy=False, collapse_trials=False, actual_coordinate_prefixes=False)

Visualizing a Participant

Next, we can visualize an individual participant.


In [ ]:
from cogrecon.core.data_structures import ParticipantData, AnalysisConfiguration
from cogrecon.core.full_pipeline import full_pipeline
import os

subid = '135'

full_pipeline(ParticipantData.load_from_file(os.path.join(iposition_directory, 'actual_coordinates.txt'), 
                                             os.path.join(iposition_directory, '{0}position_data_coordinates.txt'.format(subid)), 
                                             None), 
              AnalysisConfiguration(trial_by_trial_accuracy=False), 
              visualize=True)

Context Boundary Analysis

Next, we'll look at whether or not context boundary effects were present in the data.


In [ ]:
triples_labels = ["red->blue", "blue->red"]
across_triples = [('IceCubeCube', 'PurseCube'), ('BootCube', 'GuitarCube')]
within_triples = [('PurseCube', 'BasketballCube'), ('GuitarCube', 'HammerCube')]

The Context Boundary Effect (CBE) is calculated as the difference between the average normalized remembered distance for the across-context pairs and for the within-context pairs in the specified triples (remembered pair distance divided by actual pair distance).
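As a toy illustration of this calculation (made-up coordinates, not data from the task), consider one across-context pair and one within-context pair:

```python
from scipy.spatial import distance

# Hypothetical actual and remembered item positions (illustrative only).
actual = {'A': [0, 0], 'B': [3, 4], 'C': [6, 8]}
remembered = {'A': [0, 0], 'B': [6, 8], 'C': [6, 8]}

def norm_dist(pair, remembered, actual):
    # Remembered pair distance normalized by the true pair distance.
    return (distance.euclidean(remembered[pair[0]], remembered[pair[1]]) /
            distance.euclidean(actual[pair[0]], actual[pair[1]]))

across = norm_dist(('A', 'B'), remembered, actual)  # 10 / 5 = 2.0
within = norm_dist(('B', 'C'), remembered, actual)  # 0 / 5 = 0.0
cbe = across - within
print(cbe)  # 2.0
```

A positive CBE means pairs spanning a context boundary were remembered as proportionally farther apart than pairs within a single context.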


In [ ]:
import scipy.spatial.distance as distance

cbe_results = {}

for sid in test_results:
    subject_results = []
    for trial in test_results[sid]:
        dists = []
        for triple in across_triples:
            dist = distance.euclidean(trial[triple[0]], trial[triple[1]])
            actual_dist = distance.euclidean(correct_locations[triple[0]], correct_locations[triple[1]])
            dists.append(dist/actual_dist)
        average_across = np.mean(dists)
        dists = []
        for triple in within_triples:
            dist = distance.euclidean(trial[triple[0]], trial[triple[1]])
            actual_dist = distance.euclidean(correct_locations[triple[0]], correct_locations[triple[1]])
            dists.append(dist/actual_dist)
        average_within = np.mean(dists)
        cbe = average_across - average_within
        subject_results.append(cbe)
    cbe_results[sid] = subject_results

This data can be quickly plotted to show the mean and standard error for each trial, as well as collapsed across trials.


In [ ]:
import matplotlib.pyplot as plt

data = [cbe_results[k] for k in cbe_results]

means = np.mean(data, axis=0)
stds = np.std(data, axis=0)
stes = [s/np.sqrt(len(data)) for s in stds]

plt.figure()
plt.bar(range(0, len(means)), means, yerr=stes)
plt.figure()
plt.bar([0], np.mean(data), yerr=np.std(data)/np.sqrt(len(data)))
plt.show()
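As a cross-check on the error bars (a sketch with toy numbers; scipy is assumed to be available but is not otherwise required here), scipy.stats.sem computes the same per-trial standard errors once its ddof is matched to np.std's default:

```python
import numpy as np
from scipy import stats

# Made-up data: 3 "subjects" x 2 "trials".
toy = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# Manual standard error of the mean, as in the cell above.
manual_stes = np.std(toy, axis=0) / np.sqrt(len(toy))

# scipy's version; ddof=0 matches np.std's default (population std).
scipy_stes = stats.sem(toy, axis=0, ddof=0)
assert np.allclose(manual_stes, scipy_stes)
print(manual_stes)
```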

We can label the data in a DataFrame and then save it to file.


In [ ]:
import pandas

df = pandas.DataFrame(cbe_results).transpose()
df.columns = ['Trial 1', 'Trial 2', 'Trial 3', 'Trial 4']

In [ ]:
df

In [ ]:
df.to_csv('context_boundary_effect.csv', index_label='Subject')

Visualize a Path

Finally, we may want to visualize an exploration path from the task. This can be done using the spatial_navigation_2room visualizer. Note that it can and will take a LONG TIME to run because of the amount of data involved.


In [ ]:
from cogrecon.core.visualization.vis_spatial_navigation_2room import visualize
import os.path
from matplotlib import rc
rc('animation', html='html5')

%matplotlib inline

In [ ]:
sub_directory = os.path.join(data_path, '2RoomTestAnonymous', '124')
raw_filepath = os.path.join(sub_directory, 'RawLog_Sub124_Trial1_13_15_57_30-05-2017.csv')
summary_filepath = os.path.join(sub_directory, 'SummaryLog_Sub124_Trial1_13_15_57_30-05-2017.csv')

In [ ]:
%%capture
anim = visualize(raw_filepath, summary_filepath)

In [ ]:
anim
