
In [0]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Interaction Between Neurons - Feature Visualization

This notebook uses Lucid to reproduce some of the results in the Feature Visualization article (https://distill.pub/2017/feature-visualization/).

This notebook doesn't introduce the abstractions behind Lucid; you may wish to also read the Lucid tutorial notebook.

Note: The easiest way to use this tutorial is as a colab notebook, which allows you to dive in with no setup. We recommend you enable a free GPU by going:

Runtime   →   Change runtime type   →   Hardware Accelerator: GPU
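To confirm the accelerator is actually available, here is a quick optional check (it uses only standard TensorFlow and isn't required for the rest of the notebook):

In [0]:
# Optional sanity check: prints the GPU device name, or a warning if none is visible
import tensorflow as tf
print(tf.test.gpu_device_name() or "No GPU found - rendering will be slow")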

Install, Import, Load Model


In [0]:
# Install Lucid

!pip install --quiet lucid==0.0.5
#!pip install --quiet --upgrade-strategy=only-if-needed git+https://github.com/tensorflow/lucid.git

In [0]:
# Imports

import numpy as np
import scipy.ndimage as nd
import tensorflow as tf

import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform

In [0]:
# Let's import a model from the Lucid modelzoo!

model = models.InceptionV1()
model.load_graphdef()
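As a quick sanity check that the graph loaded, you can render a single channel right away; render_vis also accepts string objectives of the form "layer:channel_index":

In [0]:
# Smoke test: visualize one channel using the string shorthand for an objective
_ = render.render_vis(model, "mixed4a_pre_relu:476")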

Combining Objectives


In [0]:
neuron1 = ('mixed4b_pre_relu', 111)     # large fluffy
# neuron1 = ('mixed3a_pre_relu', 139)   # pointilist
# neuron1 = ('mixed3b_pre_relu',  81)   # brush strokes
# neuron1 = ('mixed4a_pre_relu',  97)   # wavy
# neuron1 = ('mixed4a_pre_relu',  41)   # frames
# neuron1 = ('mixed4a_pre_relu', 479)   # B/W

neuron2 = ('mixed4a_pre_relu', 476)     # art
# neuron2 = ('mixed4b_pre_relu', 360)   # lattices
# neuron2 = ('mixed4b_pre_relu', 482)   # arcs
# neuron2 = ('mixed4c_pre_relu', 440)   # small fluffy
# neuron2 = ('mixed4d_pre_relu', 479)   # bird beaks
# neuron2 = ('mixed4e_pre_relu', 718)   # shoulders

In [8]:
# Helper: turn a (layer_name, channel_index) pair into a channel objective
C = lambda neuron: objectives.channel(*neuron)

_ = render.render_vis(model, C(neuron1))
_ = render.render_vis(model, C(neuron2))
_ = render.render_vis(model, C(neuron1) + C(neuron2))


512 791.29175
512 1146.8491
512 1272.907
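Objectives support arithmetic beyond plain addition: they can also be scaled by constants, which lets you weight the trade-off between the two neurons. A small sketch (the weights here are arbitrary, chosen just for illustration):

In [0]:
# Weighted combination: de-emphasize neuron1 relative to neuron2 (weights are illustrative)
_ = render.render_vis(model, 0.5 * C(neuron1) + 1.0 * C(neuron2))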

Random Directions

Unfortunately, constraints on the ImageNet dataset mean we can't provide an easy way for you to reproduce the dataset examples. We can, however, reproduce the random directions. Since they're random, you'll get a different result each time, and they won't match the ones in the article.


In [0]:
obj = objectives.direction("mixed4d_pre_relu", np.random.randn(528))
_ = render.render_vis(model, obj)


512 61.127705
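Since the direction is drawn with NumPy, you can make your own runs repeatable by seeding the generator before sampling (the seed value here is arbitrary):

In [0]:
# Seed NumPy so the same random direction (and thus the same result) is produced each run
np.random.seed(0)
obj = objectives.direction("mixed4d_pre_relu", np.random.randn(528))
_ = render.render_vis(model, obj)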

Aligned Interpolation

The trick: each frame gets its own high-resolution parameterization, plus a set of low-resolution components shared across frames. Because the shared components contribute to every frame, neighboring frames are encouraged to stay visually aligned as the objective interpolates between the two channels. We hope to explore and explain this trick in more detail in an upcoming article.


In [0]:
def interpolate_param_f():
  # Per-frame parameterization: each of the 6 frames gets its own
  # high-resolution FFT-parameterized image.
  unique = param.fft_image((6, 128, 128, 3))
  # Shared low-resolution components at progressively coarser scales;
  # each lowres_tensor is upsampled across the 6 frames, tying them together.
  shared = [
    param.lowres_tensor((6, 128, 128, 3), (1, 128//2, 128//2, 3)),
    param.lowres_tensor((6, 128, 128, 3), (1, 128//4, 128//4, 3)),
    param.lowres_tensor((6, 128, 128, 3), (1, 128//8, 128//8, 3)),
    param.lowres_tensor((6, 128, 128, 3), (2, 128//8, 128//8, 3)),
    param.lowres_tensor((6, 128, 128, 3), (1, 128//16, 128//16, 3)),
    param.lowres_tensor((6, 128, 128, 3), (2, 128//16, 128//16, 3)),
  ]
  # Sum the unique and shared components, then map to valid RGB.
  return param.to_valid_rgb(unique + sum(shared), decorrelate=True)

obj = objectives.channel_interpolate("mixed4a_pre_relu", 476, "mixed4a_pre_relu", 460)

_ = render.render_vis(model, obj, interpolate_param_f)


512 7413.614
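render_vis also returns the rendered images (one batch per optimization threshold), so you can capture the six interpolation frames and display them individually. A sketch, assuming the default thresholds:

In [0]:
# Capture the final batch of frames and show each interpolation step separately
frames = render.render_vis(model, obj, interpolate_param_f)
for frame in frames[-1]:
  show(frame)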