LSUN Challenge

This document describes how to set up and run the Python code for the LSUN saliency evaluation.

Setup

With your favorite Python package management tool, install the needed libraries (here shown for pip):

pip install numpy scipy theano Cython natsort dill hdf5storage
pip install git+https://github.com/matthias-k/optpy
pip install git+https://github.com/matthias-k/pysaliency

If you want to use the SALICON dataset, you also need to install the SALICON API.
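
The SALICON API can usually be installed directly from its GitHub repository as well; the URL below is an assumption, so check the official SALICON API repository for the current location:

pip install git+https://github.com/NUS-VIP/salicon-api  # repository URL is an assumption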

Usage

Start by importing pysaliency:

import pysaliency

You probably also want to load the LSUN datasets:

dataset_location = 'datasets' # where to cache datasets
stimuli_salicon_train, fixations_salicon_train = pysaliency.get_SALICON_train(location=dataset_location)
stimuli_salicon_val, fixations_salicon_val = pysaliency.get_SALICON_val(location=dataset_location)
stimuli_salicon_test = pysaliency.get_SALICON_test(location=dataset_location)

stimuli_isun_train, stimuli_isun_val, stimuli_isun_test, fixations_isun_train, fixations_isun_val = pysaliency.get_iSUN(location=dataset_location)
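
To check that the datasets were downloaded and cached correctly, you can do a quick sanity check (a minimal sketch, assuming the standard pysaliency Stimuli and Fixations interfaces):

print(len(stimuli_salicon_train))      # number of training images
print(len(fixations_salicon_train.x))  # number of recorded fixations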


Import your saliency model into pysaliency

If you did not develop your model in the pysaliency framework, you have to import the generated saliency maps or log densities into pysaliency. If you have saved the saliency maps to a directory, with filenames corresponding to the stimuli filenames, use pysaliency.SaliencyMapModelFromDirectory. The saliency maps can be saved as png, jpg, tiff, mat or npy files.


In [ ]:
my_model = pysaliency.SaliencyMapModelFromDirectory(stimuli_salicon_train, "my_model/saliency_maps/SALICON_TRAIN")

If you have an LSUN submission file prepared, you can load it with pysaliency.SaliencyMapModelFromFile:


In [ ]:
my_model = pysaliency.SaliencyMapModelFromFile(stimuli_salicon_train, "my_model/salicon_train.mat")
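
To verify that your saliency maps are found and load correctly, you can compute the map for a single stimulus (a minimal sketch, assuming the SaliencyMapModel.saliency_map method and the Stimuli.stimuli list of image arrays):


In [ ]:
# compute and inspect the saliency map for the first training image
smap = my_model.saliency_map(stimuli_salicon_train.stimuli[0])
print(smap.shape)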

Evaluate your model

Evaluating your model with pysaliency is fairly easy. In general, the evaluation functions take the stimuli and fixations to evaluate on, plus possibly some additional configuration parameters. The following metrics are used in the LSUN saliency challenge (additionally, the information gain metric is used, see below):


In [ ]:
my_model.AUC(stimuli_salicon_train, fixations_salicon_train, nonfixations='uniform')
my_model.AUC(stimuli_salicon_train, fixations_salicon_train, nonfixations='shuffled')

Optimize your model for information gain

If you wish to hand in a probabilistic model, you might want to optimize the model's nonlinearity and centerbias for the datasets yourself. Otherwise, we will optimize all saliency map models for information gain on a subset of the iSUN dataset with the following code. Feel free to adapt it to your needs (for example, use more images for fitting).


In [ ]:
my_probabilistic_model = pysaliency.SaliencyMapConvertor(my_model, ...)
fit_stimuli, fit_fixations = pysaliency.create_subset(stimuli_isun_train, fixations_isun_train, range(0, 500))
my_probabilistic_model = pysaliency.optimize_for_information_gain(
    my_model, fit_stimuli, fit_fixations,
    num_nonlinearity=20,
    num_centerbias=12,
    optimize=[
        'nonlinearity',
        'centerbias',
        'alpha',
        #'blurradius',  # we do not optimize the blurring
    ])
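
Afterwards you can check the fitted model with the information gain metric (a minimal sketch: the held-out index range is only an illustration, and it assumes the fitted probabilistic model exposes pysaliency's information_gain method):


In [ ]:
# information gain (bit/fixation) on iSUN images not used for fitting
val_stimuli, val_fixations = pysaliency.create_subset(stimuli_isun_train, fixations_isun_train, range(500, 1000))
my_probabilistic_model.information_gain(val_stimuli, val_fixations)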

Hand in your model

