Using the KERAS library

In this Notebook we will use the Keras library to train a convolutional network on the CIFAR10 dataset.

The components we will use are the 2D Convolution and MaxPooling2D, already covered in previous Notebooks. We will also introduce two new processing steps:

  • RELU
  • DropOut

2D Convolution

Input parameters

  • nb_filter: number of convolution filters (output feature maps)
  • stack_size: number of input channels (RGB -> 3, grayscale -> 1)
  • nb_row: size (rows) of the kernel (W)
  • nb_col: size (columns) of the kernel (W)
  • init='uniform': can be uniform, normal, lecun_uniform, orthogonal
  • activation='linear': can be softmax, softplus, relu, tanh, sigmoid, linear...
  • weights=None
  • image_shape=None
  • border_mode='valid': can be valid or full
  • subsample=(1,1): subsampling (stride) used by Theano's conv2d function
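
As an illustrative sketch of how these parameters fit together (the concrete values here are only examples, not part of the notebook's model below), a layer could be declared like this with the old Keras 0.x positional signature nb_filter, stack_size, nb_row, nb_col:

from keras.layers.convolutional import Convolution2D

# 32 filters over a 3-channel (RGB) input with a 3x3 kernel (example values)
conv = Convolution2D(32, 3, 3, 3,
                     init='uniform',
                     activation='relu',
                     border_mode='full',
                     subsample=(1, 1))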

RELU

It is an activation function of the form:

(x + abs(x)) / 2.0, which is equivalent to max(0, x).
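
A minimal numpy check (numpy is assumed here only for the illustration) that this formula is just max(0, x):

import numpy as np

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
relu_formula = (x + np.abs(x)) / 2.0   # (x + |x|) / 2
relu_max = np.maximum(0.0, x)          # max(0, x)
print(relu_formula)  # 0, 0, 0, 0.5, 2
print(relu_max)      # identical values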

Reference on the advantages of using Rectified Linear activations:

The advantages of using Rectified Linear Units in neural networks are:

  • When the hard max function is used as the activation function, it induces sparsity in the hidden units (see the sketch after this list).
  • ReLU does not suffer from the vanishing gradient problem the way the sigmoid and tanh functions do. It has also been shown that deep networks can be trained efficiently using ReLU even without pre-training.
  • ReLU can be used in Restricted Boltzmann Machines to model real/integer-valued inputs.
  • http://www.quora.com/What-is-special-about-rectifier-neural-units-used-in-NN-learning
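
As a small illustration of the sparsity point above (a hedged sketch, assuming numpy): ReLU maps every negative pre-activation to an exact zero, while a sigmoid output is never exactly zero.

import numpy as np

np.random.seed(0)
pre_activations = np.random.randn(10)                  # random pre-activations
relu_out = np.maximum(0.0, pre_activations)            # exact zeros where input < 0
sigmoid_out = 1.0 / (1.0 + np.exp(-pre_activations))   # strictly positive everywhere

print("ReLU zeros: %d" % np.sum(relu_out == 0.0))
print("sigmoid zeros: %d" % np.sum(sigmoid_out == 0.0))  # always 0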

DropOut

It is a simple method to avoid overfitting: during training a random fraction of the activations is set to zero, which discourages units from co-adapting.
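
A minimal numpy sketch of the idea behind it (inverted dropout with rescaling; this is only a conceptual illustration, not Keras's internal implementation):

import numpy as np

def dropout(activations, rate=0.25):
    # zero out a random fraction `rate` of the activations and rescale the
    # survivors so the expected activation value stays the same
    keep_prob = 1.0 - rate
    mask = np.random.binomial(1, keep_prob, size=activations.shape)
    return activations * mask / keep_prob

x = np.ones((2, 4))
print(dropout(x, rate=0.5))  # about half the entries become 0, the rest 2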

Implementation

We load the required libraries

    
    
In [ ]:
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD, Adadelta, Adagrad
from keras.utils import np_utils, generic_utils
    

Parameters

    
    
In [ ]:
batch_size = 1000
nb_classes = 10
nb_epoch = 2
data_augmentation = False
    

We load the data

    
    
In [ ]:
(X_train, y_train), (X_test, y_test) = cifar10.load_data(test_split=0.1)
print X_train.shape[0], 'train samples'
print X_test.shape[0], 'test samples'
    

We convert the label vectors into binary class matrices (one-hot encoding)

    
    
In [ ]:
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
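
As an illustration of what this produces (using the np_utils helper imported above), a label such as 3 turns into a 10-dimensional one-hot row:

# example: label 3 with nb_classes = 10
print(np_utils.to_categorical([3], 10))  # [[ 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]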
    

We create the model

    
    
In [ ]:
model = Sequential()

# first convolutional block: 3 input channels (RGB) -> 32 feature maps
model.add(Convolution2D(32, 3, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(32, 32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))  # 2x2 pooling halves the spatial size
model.add(Dropout(0.25))

# second convolutional block: 32 -> 64 feature maps
model.add(Convolution2D(64, 32, 3, 3, border_mode='full'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(poolsize=(2, 2)))
model.add(Dropout(0.25))

# fully connected classifier on the flattened 64x8x8 feature maps
model.add(Flatten(64*8*8))
model.add(Dense(64*8*8, 512, init='normal'))
model.add(Activation('relu'))
model.add(Dropout(0.5))

model.add(Dense(512, nb_classes, init='normal'))
model.add(Activation('softmax'))
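
Why 64*8*8 at the Flatten/Dense boundary? Tracing the spatial size of a 32x32 CIFAR10 image through the model: the first 'full' 3x3 convolution grows it to 34x34, the following 'valid' 3x3 convolution brings it back to 32x32, and the 2x2 pooling halves it to 16x16; the second block repeats the pattern (16 -> 18 -> 16 -> 8). The last pooling layer therefore outputs 64 feature maps of size 8x8, i.e. 64*8*8 = 4096 values per image, which matches the input dimension of the first Dense layer.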
    

We configure the model for training with SGD

    
    
In [ ]:
sgd = SGD(lr=0.01, decay=1e-7, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
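
For intuition, a minimal numpy sketch of a classical momentum update (a conceptual illustration only, not Keras's exact implementation; with nesterov=True Keras applies the Nesterov variant, and decay gradually shrinks the learning rate across updates):

import numpy as np

lr, momentum = 0.01, 0.9
w = np.zeros(3)                     # parameters
v = np.zeros(3)                     # velocity (accumulated past gradients)
grad = np.array([0.5, -1.0, 2.0])   # hypothetical gradient of the loss

v = momentum * v - lr * grad        # accumulate velocity
w = w + v                           # move the parameters against the gradient
print(w)                            # -0.005, 0.01, -0.02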
    

We train

    
    
In [ ]:
if not data_augmentation:
    print "Not using data augmentation or normalization"

    X_train = X_train.astype("float32")
    X_test = X_test.astype("float32")
    X_train /= 255
    X_test /= 255
    print X_train[0]
    model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch)
    score = model.evaluate(X_test, Y_test, batch_size=batch_size)
    print 'Test score:', score
else:
    print "Using real time data augmentation"

    # this will do preprocessing and realtime data augmentation
    datagen = ImageDataGenerator(
        featurewise_center=True, # set input mean to 0 over the dataset
        samplewise_center=False, # set each sample mean to 0
        featurewise_std_normalization=True, # divide inputs by std of the dataset
        samplewise_std_normalization=False, # divide each input by its std
        zca_whitening=False, # apply ZCA whitening
        rotation_range=20, # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.2, # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.2, # randomly shift images vertically (fraction of total height)
        horizontal_flip=True, # randomly flip images horizontally
        vertical_flip=False) # do not flip images vertically

    # compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied)
    datagen.fit(X_train)

    for e in range(nb_epoch):
        print '-'*40
        print 'Epoch', e
        print '-'*40
        print "Training..."
        # batch train with realtime data augmentation
        progbar = generic_utils.Progbar(X_train.shape[0])
        for X_batch, Y_batch in datagen.flow(X_train, Y_train):
            loss = model.train(X_batch, Y_batch)
            progbar.add(X_batch.shape[0], values=[("train loss", loss)])
    

We test

    
    
In [ ]:
# note: datagen and this batch-wise loop are only defined when
# data_augmentation is True above; otherwise model.evaluate was already called
print "Testing..."
# test time!
progbar = generic_utils.Progbar(X_test.shape[0])
for X_batch, Y_batch in datagen.flow(X_test, Y_test):
    score = model.test(X_batch, Y_batch)
    progbar.add(X_batch.shape[0], values=[("test loss", score)])