WH Nixalo - 02 Aug 2017


In [20]:
import keras
from keras.models import Model
from keras.layers import Dense, Input, Conv2D
from keras.applications.imagenet_utils import _obtain_input_shape
from keras import backend as K

In [92]:
input_shape = (224, 224, 3)
img_input = Input(shape=input_shape, name='blah-input')
# x = Convolution2D(64, 3, 3, activation='relu', border_mode='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Dense(1024, activation='relu', name='fc1')(x)
x = Dense(256, activation='relu', name='fc2')(x)

In [93]:
img_input


Out[93]:
<tf.Tensor 'blah-input_5:0' shape=(?, 224, 224, 3) dtype=float32>

In [94]:
xModel = Model(img_input, x, name='xmodel')

In [95]:
# model = Model(img_input, xModel.get_layer('block1_conv1').output)
model = Model(img_input, xModel.get_layer('fc2').output)

In [96]:
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
blah-input (InputLayer)      (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
fc1 (Dense)                  (None, 224, 224, 1024)    66560     
_________________________________________________________________
fc2 (Dense)                  (None, 224, 224, 256)     262400    
=================================================================
Total params: 330,752
Trainable params: 330,752
Non-trainable params: 0
_________________________________________________________________

Aaaahaaaaaaaa. Okay. So passing a layer's .output to keras.models.Model(...) pulls in every layer of the graph up to and including the layer that output came from.

It does NOT create a model of only the layer specified. The input is a Keras tensor (with attributes: name, shape, dtype). The output is also a Keras tensor (its name encodes the layer name and activation fn, plus shape and dtype).

So a model is created consisting of all layers between, and including, the input and output layers.
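
Side note on the summary above: Dense applied to a rank-4 tensor here acts on the last axis only, which is why fc1 has just 64*1024 + 1024 = 66,560 params and the spatial dims stay 224x224.

To double-check that the truncated model really runs everything from the input up through fc2, here's a minimal sketch (assuming the cells above are in memory; the input is just random noise, not a real image):

import numpy as np

dummy = np.random.rand(1, 224, 224, 3).astype('float32')  # one fake RGB image
features = model.predict(dummy)                            # runs input -> block1_conv1 -> fc1 -> fc2
print(features.shape)                                      # (1, 224, 224, 256): fc2's output shape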


In [97]:
xModel.get_layer('fc1').output


Out[97]:
<tf.Tensor 'fc1_5/Relu:0' shape=(?, 224, 224, 1024) dtype=float32>

In [98]:
model.layers


Out[98]:
[<keras.engine.topology.InputLayer at 0x119081be0>,
 <keras.layers.convolutional.Conv2D at 0x119081c50>,
 <keras.layers.core.Dense at 0x11909ecf8>,
 <keras.layers.core.Dense at 0x11909e358>]

I'm still a bit unclear on whether the input and output have to come from the same original model. What if I make a new model taking the input of one and the output of another? Checking that out below:


In [99]:
input_shape = (224,224,3)
input_shape = _obtain_input_shape(input_shape, default_size=224, min_size=48, 
                                  include_top=False, data_format=K.image_data_format())
input_shape


Out[99]:
(224, 224, 3)

In [103]:
img_input_2 = Input(shape=input_shape, name='blah-2-input')
ჯ = Dense(1024, activation='relu', name='2fc1')(img_input_2)
ჯ = Dense(512, activation='relu', name='2fc2')(ჯ)
ჯ = Dense(256, activation='relu', name='2fc3')(ჯ)

ჯModel = Model(img_input_2, ჯ, name='ჯmodel')

In [106]:
kerlaModel = Model(img_input_2, xModel.get_layer('fc1').output)
# kerlaModel_1 = Model(img_input, xModel.get_layer('fc1').output)
# kerlaModel_2 = Model(img_input_2, ჯModel.get_layer('2fc2').output)


---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-106-8f0080719019> in <module>()
----> 1 kerlaModel = Model(img_input_2, xModel.get_layer('fc1').output)
      2 # kerlaModel_1 = Model(img_input, xModel.get_layer('fc1').output)
      3 # kerlaModel_2 = Model(img_input_2, ჯModel.get_layer('2fc2').output)

/Users/WayNoxchi/Miniconda3/lib/python3.6/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
     86                 warnings.warn('Update your `' + object_name +
     87                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 88             return func(*args, **kwargs)
     89         wrapper._legacy_support_signature = inspect.getargspec(func)
     90         return wrapper

/Users/WayNoxchi/Miniconda3/lib/python3.6/site-packages/keras/engine/topology.py in __init__(self, inputs, outputs, name)
   1746                                 'The following previous layers '
   1747                                 'were accessed without issue: ' +
-> 1748                                 str(layers_with_complete_input))
   1749                     for x in node.output_tensors:
   1750                         computable_tensors.append(x)

RuntimeError: Graph disconnected: cannot obtain value for tensor Tensor("blah-input_5:0", shape=(?, 224, 224, 3), dtype=float32) at layer "blah-input". The following previous layers were accessed without issue: []

The last two, kerlaModel_1 and kerlaModel_2 (commented out above), both work. As expected, kerlaModel doesn't. Also looking at this closed Keras issue: Feeding input to intermediate layer fails with Graph disconnected Exception #5074 -- it's as I thought. The computation graph has to be one thing: it has to be connected.

So if at some point the x graph shared a layer with the ჯ graph, or vice versa, we'd be good -- but they don't touch anywhere. Cool, so I learned a bit more about how Keras works. And it makes sense.
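
Just to sketch what "connected" would look like (hypothetical: the variable y, the layer name '2fc4', and the model name 'bridged' are made up here, and this assumes the cells above are in memory): a Keras Model is itself callable like a layer, so calling xModel on img_input_2 reuses xModel's layers and weights but wires them to the second input tensor. Now there's a path from blah-2-input through those layers, and the Model constructor has a connected graph to work with:

y = xModel(img_input_2)                               # reuse xModel's layers on the second input
y = Dense(128, activation='relu', name='2fc4')(y)     # tack a new layer onto the shared trunk
bridgedModel = Model(img_input_2, y, name='bridged')  # connected -- no Graph disconnected error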


In [ ]: