In [1]:
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2020, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
# http://numenta.org/licenses/
In [73]:
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
from correlation_experiment import SparseCorrExperiment, plot_duty_cycles, plot_entropies
%matplotlib inline
This notebook shows a number of examples illustrating how correlated the activations of sparse and dense neural networks are when inputs from different GSC (Google Speech Commands) classes are presented. Specifically, it reports the Pearson correlations and normalized dot products of the activations of specific layers (both metrics are sketched in a cell below). The experiments run in this notebook are:
- a comparison of the sparse and dense models (model_comparison),
- a comparison across activation functions (act_fn_comparison),
- a comparison across layer sizes (layer_size_comparison),
- reruns of the model comparison with a shuffled control (shuffled=True) and with sequential presentation (sequential=True),
- plots of the per-unit duty cycles and entropies.
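For reference, here is a minimal numpy sketch of the two metrics for a pair of flattened activation vectors. This is illustrative only, not the SparseCorrExperiment implementation, and the variable names are hypothetical.
In [ ]:
# Illustrative sketch only: the two similarity metrics used throughout.
import numpy as np

rng = np.random.default_rng(0)
act_a = rng.random(128)  # stand-in flattened activations for one class
act_b = rng.random(128)  # stand-in flattened activations for another class

# Pearson correlation between the two activation patterns
pearson = np.corrcoef(act_a, act_b)[0, 1]

# Dot product normalized by the vector magnitudes
norm_dot = act_a @ act_b / (np.linalg.norm(act_a) * np.linalg.norm(act_b))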
The purpose of these experiments is to get a handle on how decorrelated sparse network activations are, and thus to inform a plan of attack for exploiting sparsity for continual learning.
Each experiment will produce four plots: two for the Pearson correlation and two for the normalized dot product. Each metric is associated with an "off-diag" plot, which shows the mean value of the metric for between-class comparisons, and a "diag" plot, which shows the mean value for within-class comparisons.
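In terms of a class-by-class metric matrix, the "diag" and "off-diag" plots correspond to its diagonal and off-diagonal entries. A small sketch of that split, assuming a square numpy matrix indexed by class (the matrix here is random, purely for illustration):
In [ ]:
# Illustrative sketch only: splitting a class-by-class metric matrix into
# within-class (diagonal) and between-class (off-diagonal) means.
import numpy as np

n_classes = 10
sim = np.random.default_rng(1).random((n_classes, n_classes))  # stand-in metric matrix

diag_mean = np.diag(sim).mean()           # within-class ("diag")
off_mask = ~np.eye(n_classes, dtype=bool)
off_diag_mean = sim[off_mask].mean()      # between-class ("off-diag")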
Each experiment can be modified in several ways by manipulating the following parameters, all of which appear in the cells below: shuffled, sequential, layer_sizes, and compare_models.
Note that calculating the Pearson and dot-product matrices takes a good amount of time for large networks, and setting shuffled=True will almost double the time it takes to produce the results.
The parameters used for initializing and training the models can be found in the "experiments.cfg" file.
In [85]:
# Build the experiment from the configuration file
config_file = "experiments.cfg"
experiment = SparseCorrExperiment(config_file=config_file)
In [86]:
# Compare activation correlations between the sparse and dense models
mod_comp_corrs = experiment.model_comparison()
In [78]:
# Compare activation correlations across activation functions
act_fun_corrs = experiment.act_fn_comparison()
In [79]:
# Compare activation correlations across layer sizes of 64, 128, and 256 units
layer_size_corrs = experiment.layer_size_comparison(layer_sizes=[64, 128, 256], compare_models=False)
In [80]:
# Rerun the model comparison with a shuffled control; returns both the
# unshuffled and shuffled results
mod_comp_corrs, sh_mod_comp_corrs = experiment.model_comparison(shuffled=True)
In [ ]:
# Rerun the model comparison with sequential input presentation
mod_comp_corrs = experiment.model_comparison(sequential=True)
In [ ]:
# Duty cycles
plot_duty_cycles(experiment)
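plot_duty_cycles is defined in correlation_experiment and is not reproduced here. As a rough guide to the quantity being plotted, a unit's duty cycle is the fraction of inputs for which it is active; the sketch below computes it from a hypothetical batch of binary activations.
In [ ]:
# Hypothetical sketch (not the plot_duty_cycles implementation).
import numpy as np

# Stand-in binary activations: 1000 inputs x 256 units, roughly 10% active.
active = np.random.default_rng(2).random((1000, 256)) > 0.9
duty_cycles = active.mean(axis=0)  # per-unit fraction of inputs on which the unit fired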
In [ ]:
# Entropies
plot_entropies(experiment)
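Similarly, the entropy of the duty-cycle distribution summarizes how evenly the units are used: it is maximal when every unit is active equally often and low when a few units dominate. A hedged sketch, reusing the hypothetical duty_cycles from the cell above (not the plot_entropies implementation):
In [ ]:
# Hypothetical sketch: Shannon entropy of the normalized duty-cycle distribution.
p = duty_cycles / duty_cycles.sum()              # treat duty cycles as a probability distribution
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy in bits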