Imagenet Processing in parallel


In [1]:
%matplotlib inline
import importlib
import utils2; importlib.reload(utils2)
from utils2 import *


Using TensorFlow backend.
/home/roebius/pj/p3/lib/python3.5/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)

In [2]:
from bcolz_array_iterator import BcolzArrayIterator
from tqdm import tqdm

In [3]:
limit_mem()

This is where our full dataset lives. It's slow spinning discs, but lots of room!

NB: We can easily switch to and from using a sample. We'll use a sample for everything, except the final complete processing (which we'll use fast/expensive compute for, and time on the sample so we know how long it will take).


In [4]:
# path = '/data/jhoward/imagenet/full/'
# path = '/data/jhoward/imagenet/sample/'
path = 'data/imagenet/'  # path to the Imagenet subset that was used in Part 1

This is on a RAID 1 SSD for fast access, so good for resized images and feature arrays


In [5]:
# dpath = '/data/jhoward/fast/imagenet/full/'
# dpath = '/data/jhoward/fast/imagenet/sample/'
# %mkdir {dpath}
dpath = 'data/imagenet/'

Note that either way, AWS isn't going to be a great place for doing this kind of analysis - putting a model into production will cost at minimum $600/month for a P2 instance. For that price you can buy a GTX 1080 card, which has double the performance of the AWS P2 card! And you can set up your slow full data RAID 5 array and your fast preprocessed data RAID 1 array just as you like it. Since you'll want your own servers for production, you may as well use them for training, and benefit from the greater speed, lower cost, and greater control of storage resources.

You can put your server inside a colo facility for very little money, paying just for the network and power. (Cloud providers aren't even allowed to provide GTX 1080's!)

There's little need for distributed computing systems for the vast majority of training and production needs in deep learning.

Get word vectors

First we need to grab some word vectors, to use as our dependent variable for the image model (so that the image vectors and word vectors will be in the same space). After loading the word vectors, we'll make sure that the names of the wordnet/imagenet are in the word list.

  • Be careful not to just follow paper's approach - e.g. here word2vec better than custom wikipedia vectors. word2vec has multi-word tokens like 'golden retriever'
  • Take evaluations shown in papers with a grain of salt, and do your own tests on important bits. E.g. DeVISE (because it's an older paper) used an old and inaccurate image model, and poor word vectors, so recent papers that compare to it aren't so relevent

In [6]:
from gensim.models import word2vec
# w2v_path='/data/jhoward/datasets/nlp/GoogleNews-vectors-negative300'
w2v_path='data/GoogleNews-vectors/GoogleNews-vectors-negative300'

In [7]:
model = word2vec.KeyedVectors.load_word2vec_format(w2v_path+'.bin', binary=True)
model.save_word2vec_format(w2v_path+'.txt', binary=False)

In [8]:
lines = open(w2v_path+'.txt').readlines()

In [9]:
def parse_w2v(l):
    i=l.index(' ')
    return l[:i], np.fromstring(l[i+1:-2], 'float32', sep=' ')

In [10]:
w2v_list = list(map(parse_w2v, lines[1:]))

In [11]:
pickle.dump(w2v_list, open(path+'w2vl.pkl', 'wb'))

In [12]:
w2v_list = pickle.load(open(path+'w2vl.pkl', 'rb'))

We save the processed file so we can access it quickly in the future. It's a good idea to save any intermediate results that take a while to recreate, so you can use them both in production and prototyping.


In [13]:
w2v_dict = dict(w2v_list)
words,vectors = zip(*w2v_list)

Always test your inputs! If you're not sure what to look for, try to come up with some kind of reasonableness test.


In [14]:
np.corrcoef(w2v_dict['jeremy'], w2v_dict['Jeremy'])


Out[14]:
array([[ 1.        ,  0.16497618],
       [ 0.16497618,  1.        ]])

In [15]:
np.corrcoef(w2v_dict['banana'], w2v_dict['Jeremy'])


Out[15]:
array([[ 1.        ,  0.01871472],
       [ 0.01871472,  1.        ]])

In [16]:
lc_w2v = {w.lower(): w2v_dict[w] for w in reversed(words)}

In [17]:
lc_w2v['peacock']


Out[17]:
array([-0.109863  , -0.033203  , -0.11377   ,  0.064453  ,  0.17089801,
       -0.12695301,  0.061035  ,  0.026001  ,  0.078125  ,  0.035645  ,
       -0.030396  , -0.114258  , -0.046875  , -0.026855  , -0.31835899,
       -0.137695  , -0.111328  ,  0.088867  , -0.031006  ,  0.064941  ,
       ..., -0.13867199,  0.013367  ,  0.080078  ,  0.091309  ,
        0.008789  ,  0.085449  ,  0.043701  , -0.095215  ,  0.064941  ,
        0.04541   , -0.061523  , -0.072266  , -0.06543   , -0.060059  ,
       -0.050781  , -0.31835899,  0.107422  , -0.077637  ,  0.17285199,
        0.15429001], dtype=float32)

We're going to map word vectors for each of:

  • The 1000 categories in the Imagenet competition
  • The 82,000 nouns in Wordnet

In [18]:
fpath = get_file('imagenet_class_index.json', 
#                  'http://www.platform.ai/models/imagenet_class_index.json', 
                 'http://files.fast.ai/models/imagenet_class_index.json',
                 cache_subdir='models')
class_dict = json.load(open(fpath))
nclass = len(class_dict); nclass


Out[18]:
1000

In [19]:
class_dict['84']  # 84 should be peacock's class number


Out[19]:
['n01806143', 'peacock']

In [20]:
# - Creating the classids.txt file from Wordnet 
# - based on kelvin's suggestion @ http://forums.fast.ai/t/lesson-10-discussion/1807/20 (Thx!)
# - 1. import nltk, 2. execute nltk.download which should open a download window, then
# - 3. using the downloader select "corpora" and "wordnet"
# - 4. click download
import nltk
# nltk.download()
# > d
# > wordnet
# > q
wordnet_nouns = list(nltk.corpus.wordnet.all_synsets(pos='n'))
with open(os.path.join(path, 'classids.txt'), 'w') as f:
    f.writelines(['n{:08d} {}\n'.format(n.offset(), n.name().split('.')[0]) for n in wordnet_nouns])

classids_1k = dict(class_dict.values())
classid_lines = open(path+'classids.txt', 'r').readlines()
classids = dict(l.strip().split(' ') for l in classid_lines)
len(classids)


Out[20]:
82115

In [21]:
# - Original cell, could use it if you have the original classids.txt file
# classids_1k = dict(class_dict.values())
# classid_lines = open(path+'../classids.txt', 'r').readlines()
# classids = dict(l.strip().split(' ') for l in classid_lines)
# len(classids)

In [22]:
# - The file classids.txt could alternatively be obtained from http://image-net.org/archive/words.txt by
# - saving the page as 'classids.txt'. It will be a tab-separated file.
# - In this case use the following code instead of the previous cell
#
# classids_1k = dict(class_dict.values())
# classid_lines = open(path+'classids.txt', 'r').readlines()
# # classids = dict(l.strip().split(' ') for l in classid_lines)
# classids = dict(l.strip().split('\t') for l in classid_lines)  # the separator is the tab char
# len(classids)

In [23]:
classids['n01806143']


Out[23]:
'peacock'

In [24]:
syn_wv = [(k, lc_w2v[v.lower()]) for k,v in classids.items()
          if v.lower() in lc_w2v]
syn_wv_1k = [(k, lc_w2v[v.lower()]) for k,v in classids_1k.items()
          if v.lower() in lc_w2v]
syn2wv = dict(syn_wv); len(syn2wv)


Out[24]:
51640

In [25]:
nomatch = [v[0] for v in class_dict.values() if v[0] not in syn2wv]
len(nomatch)


Out[25]:
226

In [26]:
# nm_path=path+'train_nm/'
# os.mkdir(nm_path)
# for nm in nomatch: os.rename(path+'train/'+nm, nm_path+nm)

In [27]:
ndim = len(list(syn2wv.values())[0]); ndim


Out[27]:
300

Resize images

Now that we've got our word vectors, we need a model that can create image vectors. It's nearly always best to start with a pre-train image model, and these require a specific size input. We'll be using resnet, which requires 224x224 sized images. Reading jpegs and resizing them can be slow, so we'll store the result of this.

First we create the filename list for the imagenet archive:


In [28]:
fnames = list(glob.iglob(path+'train/*/*.JPEG'))
pickle.dump(fnames, open(path+'fnames.pkl', 'wb'))

Even scanning a large collection of files is slow, so we save the filenames:


In [29]:
fnames = pickle.load(open(path+'fnames.pkl', 'rb'))

In [30]:
fnames = np.random.permutation(fnames)

In [31]:
pickle.dump(fnames, open(path+'fnames_r.pkl', 'wb'))

In [32]:
fnames = pickle.load(open(path+'fnames_r.pkl', 'rb'))

In [33]:
new_s = 224 # height and width to resize to
n = len(fnames); n


Out[33]:
19439

In [34]:
# bc_path = f'{dpath}/trn_resized_{new_s}_r.bc'

In [35]:
# bc_path = f'{path}/results/trn_resized_{new_s}_r.bc'
bc_path = '{}/trn_resized_{}_r.bc'.format(path, new_s)

Using pillow to resize the image (recommendation: install pillow-simd for 600% speedup). To install, force remove the conda installed version, then:

CC="cc -mavx2" pip install -U --force-reinstall pillow-simd

In [36]:
def _resize(img):
    shortest = min(img.width,img.height)
    resized = np.round(np.multiply(new_s/shortest, img.size)).astype(int)
    return img.resize(resized, Image.BILINEAR)

In [37]:
def resize_img(i):
    img = Image.open(fnames[i])
    s = np.array(img).shape
    if len(s)!=3 or s[2]!=3: return
    return _resize(img)

In [38]:
def resize_img_bw(i):
    return _resize(Image.open(fnames[i]).convert('L'))

Pre-allocate memory in threadlocal storage


In [39]:
tl = threading.local()

In [40]:
tl.place = np.zeros((new_s,new_s,3), 'uint8')
#tl.place = np.zeros((new_s,new_s), 'uint8')

Bcolz is amazingly fast, easy to use, and provides a largely numpy-compatible interface. It creates file-backed arrays and are transparently cached in memory.

Create (or open) compressed array for our resized images


In [41]:
arr = bcolz.carray(np.empty((0, new_s, new_s, 3), 'float32'), 
                   chunklen=16, mode='w', rootdir=bc_path)

Function that appends resized image with black border added to longer axis


In [42]:
def get_slice(p, n): return slice((p-n+1)//2, p-(p-n)//2)

def app_img(r):
    tl.place[:] = (np.array(r)[get_slice(r.size[1],new_s), get_slice(r.size[0],new_s)] 
        if r else 0.)
    arr.append(tl.place)

In [43]:
# Serial version
for i in range(2000): app_img(resize_img(i))
arr.flush()

In [44]:
# Parallel version
step=6400
for i in tqdm(range(0, n, step)):
    with ThreadPoolExecutor(max_workers=16) as execr:
        res = execr.map(resize_img, range(i, min(i+step, n)))
        for r in res: app_img(r)
    arr.flush()


100%|██████████| 4/4 [00:29<00:00,  6.96s/it]

Times to process 2000 images that aren't in filesystem cache (tpe==ThreadPoolExecutor, ppe==ProcessPoolExecutor; number shows #jobs)


In [45]:
times = [('tpe 16', 3.22), ('tpe 12', 3.65), ('ppe 12', 3.97), ('ppe 8 ', 4.47), 
         ('ppe 6 ', 4.89), ('ppe 3 ', 8.03), ('serial', 25.3)]

column_chart(*zip(*times))



In [46]:
arr = bcolz.open(bc_path)

In [47]:
plt.imshow(arr[-2].astype('uint8'))


Out[47]:
<matplotlib.image.AxesImage at 0x7f971e30ed30>

We do our prototyping in a notebook, and then use 'Download as->Notebook' to get a python script we can run under tmux. Notebooks are great for running small experiments, since it's easy to make lots of changes and inspect the results in a wide variety of ways.

Create model

Now we're ready to create our first model. Step one: create our target labels, which is simply a case of grabbing the synset id from the filename, and looking up the word vector for each.


In [48]:
def get_synset(f): return f[f.rfind('/')+1:f.find('_')]

labels = list(map(get_synset, fnames))
labels[:5]


Out[48]:
['n02058221', 'n01871265', 'n03100240', 'n02113624', 'n02006656']

In [49]:
vecs = np.stack([syn2wv[l] for l in labels]); vecs.shape


Out[49]:
(19439, 300)

We'll be using resnet as our model for these experiments.


In [50]:
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np.float32).reshape((1,1,3))
inp = Input((224,224,3))
preproc = Lambda(lambda x: (x - rn_mean)[:, :, :, ::-1])(inp)
model = ResNet50(include_top=False, input_tensor=preproc)

In order to make each step faster, we'll save a couple of intermediate activations that we'll be using shortly. First, the last layer before the final convolutional bottleneck:


In [51]:
mid_start = model.get_layer('res5b_branch2a')
mid_out = model.layers[model.layers.index(mid_start)-1]
shp=mid_out.output_shape; shp


Out[51]:
(None, 7, 7, 2048)

We put an average pooling layer on top to make it a more managable size.


In [52]:
rn_top = Model(model.input, mid_out.output)
rn_top_avg = Sequential([rn_top, AveragePooling2D((7,7))])

In [53]:
shp=rn_top_avg.output_shape; shp


Out[53]:
(None, 1, 1, 2048)

We create this intermediate array a batch at a time, so we don't have to keep it in memory.


In [54]:
# features_mid = bcolz.open(path+'results/features_mid_1c_r.bc')

In [55]:
features_mid = bcolz.carray(np.empty((0,)+shp[1:]), rootdir=path+'results/features_mid_1c_r.bc',
                           chunklen=16, mode='w')

In [56]:
def gen_features_mid(dirn):
    gen = (arr[i:min(i+128,n)] for i in range(0, len(arr), 128))
    for i,batch in tqdm(enumerate(gen)):
        features_mid.append(rn_top_avg.predict_on_batch(batch[:,:,::dirn]))
        if (i%100==99): features_mid.flush()
    features_mid.flush()

In [57]:
gen_features_mid(1)


168it [01:52,  1.49it/s]

In [58]:
gen_features_mid(-1)


168it [02:02,  1.38it/s]

In [59]:
features_mid.shape


Out[59]:
(38878, 1, 1, 2048)

Our final layers match the original resnet, although we add on extra resnet block at the top as well.


In [60]:
rn_bot_inp = Input(shp[1:])
x = rn_bot_inp
# x = identity_block(x, 3, [256, 256, 1024], stage=4, block='f')
# x = conv_block(x, 3, [512, 512, 2048], stage=5, block='a')
x = identity_block(x, 3, [512, 512, 2048], stage=5, block='b')
x = identity_block(x, 3, [512, 512, 2048], stage=5, block='c')
x = Flatten()(x)
rn_bot = Model(rn_bot_inp, x)
rn_bot.output_shape


Out[60]:
(None, 2048)

In [61]:
for i in range(len(rn_bot.layers)-2):
    rn_bot.layers[-i-2].set_weights(model.layers[-i-2].get_weights())

We save this layer's results too, although it's smaller so should fit in RAM.


In [62]:
%time features_last = rn_bot.predict(features_mid, batch_size=128)


CPU times: user 3.16 s, sys: 252 ms, total: 3.42 s
Wall time: 2.91 s

In [63]:
features_last = bcolz.carray(features_last, rootdir=path+'results/features_last_r.bc', 
                             chunklen=64, mode='w')

In [64]:
features_last = bcolz.open(path+'results/features_last_r.bc')[:]

We add a linear model on top to predict our word vectors.


In [65]:
lm_inp = Input(shape=(2048,))
lm = Model(lm_inp, Dense(ndim)(lm_inp))

cosine distance is a good choice for anything involving nearest neighbors (which we'll use later).


In [66]:
def cos_distance(y_true, y_pred):
    y_true = K.l2_normalize(y_true, axis=-1)
    y_pred = K.l2_normalize(y_pred, axis=-1)
    return K.mean(1 - K.sum((y_true * y_pred), axis=-1))

In [67]:
lm.compile('adam','cosine_proximity')

In [68]:
v = np.concatenate([vecs, vecs])

In [69]:
lm.evaluate(features_last, v, verbose=0)


Out[69]:
-1.286830739647744e-05

In [70]:
# K.set_value(lm.optimizer.lr, 1e-5)

In [71]:
lm.fit(features_last, v, verbose=2, epochs=3)


Epoch 1/3
3s - loss: -1.1531e-03
Epoch 2/3
3s - loss: -1.2519e-03
Epoch 3/3
3s - loss: -1.3052e-03
Out[71]:
<keras.callbacks.History at 0x7f96a3d8db70>

Be sure to save intermediate weights, to avoid recalculating them


In [72]:
lm.save_weights(path+'results/lm_cos.h5')

In [73]:
lm.load_weights(path+'results/lm_cos.h5')

Nearest Neighbors

Let's use nearest neighbors to look at a couple of examples, to see how well it's working. The first NN will be just looking at the word vectors of the 1,000 imagenet competition categories.


In [74]:
syns, wvs = list(zip(*syn_wv_1k))
wvs = np.array(wvs)

In [75]:
nn = NearestNeighbors(3, metric='cosine', algorithm='brute').fit(wvs)

In [76]:
nn = LSHForest(20, n_neighbors=3).fit(wvs)

In [77]:
%time pred_wv = lm.predict(features_last[:10000])


CPU times: user 720 ms, sys: 48 ms, total: 768 ms
Wall time: 550 ms

In [78]:
%time dist, idxs = nn.kneighbors(pred_wv)


CPU times: user 1min 32s, sys: 6min 31s, total: 8min 4s
Wall time: 42.3 s

In [79]:
[[classids[syns[id]] for id in ids] for ids in idxs[190:200]]


Out[79]:
[['garter_snake', 'irish_wolfhound', 'brittany_spaniel'],
 ['wolf_spider', 'isopod', 'proboscis_monkey'],
 ['wolf_spider', 'proboscis_monkey', 'garter_snake'],
 ['tray', 'warthog', 'crate'],
 ['fiddler_crab', 'isopod', 'proboscis_monkey'],
 ['garter_snake', 'wolf_spider', 'arctic_fox'],
 ['wolf_spider', 'stinkhorn', 'garter_snake'],
 ['wolf_spider', 'garter_snake', 'siamang'],
 ['eel', 'crayfish', 'isopod'],
 ['proboscis_monkey', 'fiddler_crab', 'garter_snake']]

In [80]:
plt.imshow(arr[190].astype('uint8'))


Out[80]:
<matplotlib.image.AxesImage at 0x7f96fb744550>

A much harder task is to look up every wordnet synset id.


In [81]:
all_syns, all_wvs = list(zip(*syn_wv))
all_wvs = np.array(all_wvs)

In [82]:
all_nn = LSHForest(20, n_neighbors=3).fit(all_wvs)

In [83]:
%time dist, idxs = all_nn.kneighbors(pred_wv[:200])


CPU times: user 12.6 s, sys: 50.9 s, total: 1min 3s
Wall time: 5.41 s

In [84]:
[[classids[all_syns[id]] for id in ids] for ids in idxs[190:200]]


Out[84]:
[['critter', 'brittany_spaniel', 'fox_terrier'],
 ['dishpan', 'corncrib', 'redheaded_woodpecker'],
 ['belted_kingfisher', 'squirrel', 'squirrel'],
 ['hubbard_squash', 'hubbard_squash', 'crudites'],
 ['stinkhorn', 'corncrib', 'redheaded_woodpecker'],
 ['staghorn_fern', 'deutzia', 'teasel'],
 ['dishpan', 'corncrib', 'toad_lily'],
 ['summer_tanager', 'snowy_egret', 'redheaded_woodpecker'],
 ['eel', 'eel', 'spoonbill'],
 ['pinecone', 'binturong', 'andean_condor']]

Fine tune

To improve things, let's fine tune more layers.


In [85]:
lm_inp2 = Input(shape=(2048,))
lm2 = Model(lm_inp2, Dense(ndim)(lm_inp2))

In [86]:
for l1,l2 in zip(lm.layers,lm2.layers): l2.set_weights(l1.get_weights())

In [87]:
rn_bot_seq = Sequential([rn_bot, lm2])
rn_bot_seq.compile('adam', 'cosine_proximity')
rn_bot_seq.output_shape


Out[87]:
(None, 300)

In [88]:
bc_it = BcolzArrayIterator(features_mid, v, shuffle=True, batch_size=128)

In [89]:
K.set_value(rn_bot_seq.optimizer.lr, 1e-3)

In [90]:
steps_per_epoch = int(np.ceil(bc_it.N/128))  # useful for the following Keras 2 fit_generator

In [91]:
rn_bot_seq.fit_generator(bc_it, steps_per_epoch, verbose=2, epochs=4)


Epoch 1/4
7s - loss: -1.3214e-03
Epoch 2/4
6s - loss: -1.4257e-03
Epoch 3/4
6s - loss: -1.5225e-03
Epoch 4/4
6s - loss: -1.6276e-03
Out[91]:
<keras.callbacks.History at 0x7f9712c70b70>

In [92]:
K.set_value(rn_bot_seq.optimizer.lr, 1e-4)

In [93]:
rn_bot_seq.fit_generator(bc_it, steps_per_epoch, verbose=2, epochs=8)


Epoch 1/8
6s - loss: -1.7890e-03
Epoch 2/8
6s - loss: -1.8479e-03
Epoch 3/8
6s - loss: -1.8858e-03
Epoch 4/8
6s - loss: -1.9164e-03
Epoch 5/8
6s - loss: -1.9434e-03
Epoch 6/8
6s - loss: -1.9681e-03
Epoch 7/8
6s - loss: -1.9958e-03
Epoch 8/8
6s - loss: -2.0139e-03
Out[93]:
<keras.callbacks.History at 0x7f9712c70710>

In [94]:
K.set_value(rn_bot_seq.optimizer.lr, 1e-5)

In [95]:
rn_bot_seq.fit_generator(bc_it, steps_per_epoch, verbose=2, epochs=5)


Epoch 1/5
6s - loss: -2.0466e-03
Epoch 2/5
6s - loss: -2.0564e-03
Epoch 3/5
6s - loss: -2.0551e-03
Epoch 4/5
6s - loss: -2.0608e-03
Epoch 5/5
6s - loss: -2.0626e-03
Out[95]:
<keras.callbacks.History at 0x7f971c31af98>

In [96]:
rn_bot_seq.evaluate(features_mid, v, verbose=2)


Out[96]:
-0.0021015258896023691

In [97]:
rn_bot_seq.save_weights(path+'results/rn_bot_seq_cos.h5')

In [98]:
rn_bot_seq.load_weights(path+'results/rn_bot_seq_cos.h5')

KNN again


In [99]:
%time pred_wv = rn_bot_seq.predict(features_mid)


CPU times: user 7.86 s, sys: 812 ms, total: 8.67 s
Wall time: 6.68 s

In [100]:
rng = slice(190,200)

In [101]:
dist, idxs = nn.kneighbors(pred_wv[rng])

In [102]:
[[classids[syns[id]] for id in ids] for ids in idxs]


Out[102]:
[['dingo', 'garter_snake', 'coyote'],
 ['tractor', 'minivan', 'jeep'],
 ['spiny_lobster', 'spoonbill', 'crayfish'],
 ['tray', 'crate', 'tub'],
 ['wolf_spider', 'isopod', 'irish_wolfhound'],
 ['rapeseed', 'racer', 'sports_car'],
 ['crane', 'crane', 'paintbrush'],
 ['wolf_spider', 'garter_snake', 'fox_squirrel'],
 ['eel', 'crayfish', 'sea_cucumber'],
 ['bookshop', 'bittern', 'coho']]

In [103]:
dist, idxs = all_nn.kneighbors(pred_wv[rng])

In [104]:
[[classids[all_syns[id]] for id in ids] for ids in idxs]


Out[104]:
[['garter_snake', 'raccoon', 'raccoon'],
 ['tractor', 'tractor', 'dump_truck'],
 ['spiny_lobster', 'spiny_lobster', 'crayfish'],
 ['tray', 'pancake_turner', 'allen_wrench'],
 ['stinkhorn', 'redheaded_woodpecker', 'bufflehead'],
 ['racer', 'racer', 'racer'],
 ['crane', 'crane', 'crane'],
 ['wolf_spider', 'garter_snake', 'fox_squirrel'],
 ['eel', 'eel', 'squid'],
 ['bookshop', 'jewfish', 'heronry']]

In [105]:
plt.imshow(arr[rng][1].astype('uint8'))


Out[105]:
<matplotlib.image.AxesImage at 0x7f9702c82358>

Text -> Image

Something very nice about this kind of model is we can go in the other direction as well - find images similar to a word or phrase!


In [106]:
img_nn = NearestNeighbors(3, metric='cosine', algorithm='brute').fit(pred_wv)

In [107]:
img_nn2 = LSHForest(20, n_neighbors=3).fit(pred_wv)

In [108]:
word = 'boat'
vec = w2v_dict[word]
# dist, idxs = img_nn.kneighbors(vec.reshape(1,-1))
dist, idxs = img_nn2.kneighbors(vec.reshape(1,-1))

In [109]:
ims = [Image.open(fnames[fn%n]) for fn in idxs[0]]
display(*ims)



In [110]:
vec = (w2v_dict['engine'] + w2v_dict['boat'])/2
dist, idxs = img_nn.kneighbors(vec.reshape(1,-1))

In [111]:
def slerp(val, low, high):
    """Spherical interpolation. val has a range of 0 to 1."""
    if val <= 0: return low
    elif val >= 1: return high
    omega = np.arccos(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high)))
    so = np.sin(omega)
    return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega)/so * high

In [112]:
vec = slerp(0.5, w2v_dict['paddle'], w2v_dict['boat'])
dist, idxs = img_nn.kneighbors(vec.reshape(1,-1))

Image -> image

Since that worked so well, let's try to find images with similar content to another image...


In [113]:
ft_model = Sequential([rn_top_avg, rn_bot_seq])

In [114]:
# new_file = '/data/jhoward/imagenet/full/valid/n01498041/ILSVRC2012_val_00005642.JPEG'
new_file = './data/imagenet/valid/n01498041/n01498041_4754.JPEG'

In [115]:
# new_file = '/data/jhoward/imagenet/full/valid/n01440764/ILSVRC2012_val_00007197.JPEG'
new_file = './data/imagenet/valid/n01440764/n01440764_6878.JPEG'

In [116]:
new_im = Image.open(new_file).resize((224,224), Image.BILINEAR); new_im


Out[116]:

In [117]:
vec = ft_model.predict(np.expand_dims(new_im, 0))

In [118]:
dist, idxs = img_nn2.kneighbors(vec)

In [119]:
ims = [Image.open(fnames[fn%n]) for fn in idxs[0]]
display(*ims)



In [ ]: