Numpy CNN

Made this with a friend to teach them (and play around with) how ANNs work. Figured it'd be useful to others as well.

First, a simple Keras network for MNIST

The simplest network I can think of, as a POC.


In [1]:
import keras
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Dense, Dropout, Flatten, Input, Conv2D, MaxPooling2D
from keras import backend as K


Using TensorFlow backend.
/usr/local/Cellar/python3/3.6.0/Frameworks/Python.framework/Versions/3.6/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)

In [2]:
(x_train, y_train), (x_test, y_test) = mnist.load_data()


Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11255808/11490434 [============================>.] - ETA: 0s

In [3]:
from PIL import Image
Image.fromarray(x_train[0]).resize((256,256))


Out[3]:
[image: x_train[0] rendered as a 256×256 image, a handwritten 5]
In [4]:
y_train[0]


Out[4]:
5

In [5]:
K.image_data_format()


Out[5]:
'channels_last'

In [6]:
batch_size = 128
num_classes = 10
epochs = 1

In [7]:
# Fiddle with X

# input image dimensions
img_rows, img_cols = 28, 28

x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255

In [8]:
x_train[0].max(), x_train[0].min()


Out[8]:
(1.0, 0.0)

In [9]:
# Fiddle with Y

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

In [10]:
y_train[0]


Out[10]:
array([ 0.,  0.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  0.])
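
As a sanity check on what to_categorical is doing, here's a minimal numpy equivalent (just a sketch; one_hot is my own name, not a Keras function):

import numpy as np

def one_hot(labels, num_classes):
    # row i of the identity matrix is the one-hot vector for class i
    return np.eye(num_classes)[labels]

# one_hot(np.array([5]), 10)[0] gives the same vector as Out[10]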

In [11]:
x_train.shape


Out[11]:
(60000, 28, 28, 1)

In [12]:
# inputs
mnist_input = Input(shape=(28, 28, 1))

In [13]:
def makeDaModel():
    # builds the functional model; uses the global mnist_input from the cell above
    conv1 = Conv2D(32, kernel_size=(3, 3), activation='relu')(mnist_input)
    conv2 = Conv2D(64, (3, 3), activation='relu')(conv1)
    maxP1 = MaxPooling2D(pool_size=(2, 2))(conv2)
    drop1 = Dropout(0.25)(maxP1)
    flat = Flatten()(drop1)
    dense1 = Dense(128, activation='relu')(flat)
    drop2 = Dropout(0.5)(dense1)
    dense2 = Dense(num_classes, activation='softmax')(drop2)

    model = Model(inputs=mnist_input, outputs=dense2)

    model.compile(
        loss=keras.losses.categorical_crossentropy,
        optimizer=keras.optimizers.Adadelta(),
        metrics=['accuracy']
    )
    return model

In [14]:
model = makeDaModel()

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)

print('Test loss:', score[0])
print('Test accuracy:', score[1])


Train on 60000 samples, validate on 10000 samples
Epoch 1/1
60000/60000 [==============================] - 168s - loss: 0.3380 - acc: 0.8969 - val_loss: 0.0781 - val_acc: 0.9755
Test loss: 0.0781127680926
Test accuracy: 0.9755

In [15]:
model.summary()


_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 28, 28, 1)         0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 12, 12, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               1179776   
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
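
The param counts check out by hand: a Conv2D layer has kernel_h * kernel_w * in_channels * filters weights plus one bias per filter, and a Dense layer has in_units * out_units weights plus one bias per unit. A quick sketch of the arithmetic:

3*3*1*32 + 32         # conv2d_1: 320
3*3*32*64 + 64        # conv2d_2: 18496
12*12*64 * 128 + 128  # dense_1: 9216 flattened inputs -> 1179776
128*10 + 10           # dense_2: 1290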

In [16]:
model.layers[1].get_weights()[0].shape


Out[16]:
(3, 3, 1, 32)

Okay, now for the good stuff: a pure numpy version

For an easy-to-interpret look at exactly how these things work


In [17]:
import numpy as np

In [18]:
# first let's give ourselves some vars
# need to give ourselves some Xavier-init'ed weights
# http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf

from functools import reduce

def xavier(shape):
    # product of the kernel dims; the trailing 1 seeds reduce so a 2D (Dense) shape doesn't blow up
    receptive = reduce(lambda x, y: x * y, list(shape)[:-2], 1)
    fan_in = receptive * shape[-2]   # number of input units
    fan_out = receptive * shape[-1]  # number of output units
    # same as sqrt(6 / (fan_in + fan_out)), which is what Keras' glorot_uniform uses
    limit = np.sqrt(3.0 / ((fan_in + fan_out) / 2))
    print("fan_in: {}, fan_out: {}, limit: {}".format(fan_in, fan_out, limit))
    return np.random.uniform(-limit, limit, shape)


In [19]:
conv1W = xavier((3, 3, 1, 32))


fan_in: 9, fan_out: 288, limit: 0.1421338109037403

In [20]:
conv1W.shape


Out[20]:
(3, 3, 1, 32)

Alright, let's check my work against Keras' own init



In [21]:
model = makeDaModel()

In [22]:
model.layers[1].get_weights()[0].mean(), conv1W.mean()


Out[22]:
(0.0014878797, -0.0011125192337496379)

In [23]:
model.layers[1].get_weights()[0].max(), conv1W.max()


Out[23]:
(0.14111416, 0.1410259892107972)

In [24]:
model.layers[1].get_weights()[0].min(), conv1W.min()


Out[24]:
(-0.14202501, -0.14203956681735094)

In [25]:
model.layers[1].get_weights()[0].std(), conv1W.std()


Out[25]:
(0.085779622, 0.07593535136279192)
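
The stds sit further apart than the other stats, but that's just sampling noise: a uniform draw on (-limit, limit) has an expected std of limit / sqrt(3), and with only 3*3*1*32 = 288 weights in the tensor the sample std wanders around it. A quick check, using the limit printed above:

0.1421338109037403 / np.sqrt(3)  # ~0.0821, right between the two observed stds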

In [26]:
model.layers[1].get_weights()[0].shape == conv1W.shape


Out[26]:
True

Seems legit
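
The next building block would be the convolution itself. As a preview, here's a minimal numpy sketch of the forward pass conv2d_1 computes, assuming 'valid' padding and stride 1 (conv_forward is my own name, and this naive loop is for interpretability, not how Keras actually implements it):

def conv_forward(x, W, b):
    # x: (height, width, in_channels); W: (kh, kw, in_channels, filters); b: (filters,)
    kh, kw, _, filters = W.shape
    out_h = x.shape[0] - kh + 1  # 'valid' padding: 28 -> 26 with a 3x3 kernel
    out_w = x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w, filters))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i+kh, j:j+kw, :]  # (kh, kw, in_channels) window
            # contract all three patch axes against the first three axes of W
            out[i, j, :] = np.tensordot(patch, W, axes=3) + b
    return np.maximum(out, 0)  # relu, as in the Keras layer

# conv_forward(x_train[0], conv1W, np.zeros(32)).shape -> (26, 26, 32)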


