Image Net Preprocessing

Notebook di processamento delle immagini di Image Net. Obiettivo è realizzare un batch input che, sfruttando il meccasnismo a code descritto in Tensorflow, fornisca batch della dimensione desiderata per il numero di epoche desiderato.

Viene inoltre sfruttanto l'algoritmo di Inception preprocessing per fornire in input immagini della dimensione corretta con le correzioni preaddestramento fornite da Tensorflow


In [1]:
!rm -rf /tmp/ImageNetTrainTransfer

In [2]:
#Import
import pandas as pd
import numpy as np
import os
import tensorflow as tf
import random
from PIL import Image
#Inception preprocessing code from https://github.com/tensorflow/models/blob/master/slim/preprocessing/inception_preprocessing.py
#useful to maintain training dimension
from utils import inception_preprocessing
import sys

#from inception import inception
'''
Uso di slim e nets_factory (come per SLIM Tensorflow https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py)
per il ripristino della rete. 

Le reti devono essere censite in nets_factory (v. struttura file nella directory di questo notebook)
'''

slim = tf.contrib.slim
from nets import nets_factory

In [3]:
#Global Variables
IMAGE_NET_ROOT_PATH = '/home/carnd/transfer-learning-utils/tiny-imagenet-200/'
#IMAGE_NET_ROOT_PATH = '/data/lgrazioli/'
IMAGE_NET_LABELS_PATH = IMAGE_NET_ROOT_PATH + 'words.txt'
IMAGE_NET_TRAIN_PATH = IMAGE_NET_ROOT_PATH + 'train/'
TRAINING_CHECKPOINT_DIR = '/tmp/ImageNetTrainTransfer'
#Transfer learning CHECKPOINT PATH
#File ckpt della rete
CHECKPOINT_PATH = '/home/carnd/transfer-learning-utils/inception_v4.ckpt'

Lettura file words di ImageNet

Lettura del file words di ImageNet come PandaDF. A ogni id (cartella che contiene immagini per le classi fornite) vengono assegnati i label


In [4]:
#Reading label file as Panda dataframe
labels_df = pd.read_csv(IMAGE_NET_LABELS_PATH, sep='\\t', header=None, names=['id','labels'])
labels_df.head(5)


/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/ipykernel/__main__.py:2: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators (separators > 1 char and different from '\s+' are interpreted as regex); you can avoid this warning by specifying engine='python'.
  from ipykernel import kernelapp as app
Out[4]:
id labels
0 n00001740 entity
1 n00001930 physical entity
2 n00002137 abstraction, abstract entity
3 n00002452 thing
4 n00002684 object, physical object

In [5]:
labels_df.count()


Out[5]:
id        82115
labels    82114
dtype: int64

Aggiunta colonna di lunghezza del label (quante classi contiene ogni label).


In [6]:
#new_labels = []
labels_lengths = []
for idx, row in labels_df.iterrows():
    #Convertire a stringa perchè alcuni sono float
    current_labels = tuple(str(row['labels']).split(','))
    #new_labels.append(current_labels)
    labels_lengths.append(len(current_labels))

In [7]:
labels_df['labels_length'] = labels_lengths
labels_indices = [idx for idx, _ in labels_df.iterrows()]
labels_df['indices'] = labels_indices

In [8]:
labels_df.head(20)


Out[8]:
id labels labels_length indices
0 n00001740 entity 1 0
1 n00001930 physical entity 1 1
2 n00002137 abstraction, abstract entity 2 2
3 n00002452 thing 1 3
4 n00002684 object, physical object 2 4
5 n00003553 whole, unit 2 5
6 n00003993 congener 1 6
7 n00004258 living thing, animate thing 2 7
8 n00004475 organism, being 2 8
9 n00005787 benthos 1 9
10 n00005930 dwarf 1 10
11 n00006024 heterotroph 1 11
12 n00006150 parent 1 12
13 n00006269 life 1 13
14 n00006400 biont 1 14
15 n00006484 cell 1 15
16 n00007347 causal agent, cause, causal agency 3 16
17 n00007846 person, individual, someone, somebody, mortal,... 6 17
18 n00015388 animal, animate being, beast, brute, creature,... 6 18
19 n00017222 plant, flora, plant life 3 19

Train DF

Panda Dataframe che contiene i path di tutte le immagini, la relativa classe, id dell'immagine e classe. La classe viene ottenuta tramite lookup su labels_df (tale operazione pesa molto in termini di tempi di esecuzione)

Può richiedere del tempo. Per lanciare su un campione si può bloccare a un determinato valore di idx


In [9]:
train_paths = []
for idx, label_dir in enumerate(os.listdir(IMAGE_NET_TRAIN_PATH)):
    image_dir_path = IMAGE_NET_TRAIN_PATH + label_dir + '/images/'
    print("Processing label {0}".format(label_dir))
    for image in os.listdir(image_dir_path):
        #Estrazione class_id
        class_id = image.split('.')[0].split('_')[0]
        #Lookup su labels df
        target_label = labels_df[labels_df['id'] == class_id] #=> pass to tf.nn.one_hot
        #Estrazione del label
        target_label = target_label['labels'].values[0]
        train_paths.append((image_dir_path + image, 
                            class_id,
                            image.split('.')[0].split('_')[1],
                            target_label
                           ))
    if idx == 10:
        break
train_df = pd.DataFrame(train_paths, columns=['im_path','class', 'im_class_id', 'target_label'])
print(train_df.count())
train_df.head()


Processing label n09332890
Processing label n04275548
Processing label n02165456
Processing label n03179701
Processing label n04597913
Processing label n01855672
Processing label n02129165
Processing label n07720875
Processing label n03733131
Processing label n02950826
Processing label n01644900
im_path         5500
class           5500
im_class_id     5500
target_label    5500
dtype: int64
Out[9]:
im_path class im_class_id target_label
0 /home/carnd/transfer-learning-utils/tiny-image... n09332890 347 lakeside, lakeshore
1 /home/carnd/transfer-learning-utils/tiny-image... n09332890 188 lakeside, lakeshore
2 /home/carnd/transfer-learning-utils/tiny-image... n09332890 16 lakeside, lakeshore
3 /home/carnd/transfer-learning-utils/tiny-image... n09332890 116 lakeside, lakeshore
4 /home/carnd/transfer-learning-utils/tiny-image... n09332890 61 lakeside, lakeshore

Pulizia delle immagini che non sono nel formato desiderato da inception_preprocessing (3 canali). Operazione lunga!


In [10]:
#Remove black and white images
uncorrect_images = 0
#Salvataggio indici di immagini da eliminare
to_remove_indexes = []
for idx, record in train_df.iterrows():
    #Leggo immagine come np.array
    im_array = np.array(Image.open(record['im_path']))
    #Se non ha 3 canali la aggiungo a quelle da eliminare
    if im_array.shape[-1] != 3:
        uncorrect_images += 1
        to_remove_indexes.append(idx)
    if idx % 20 == 0:
        sys.stdout.write("\rProcessed {0} images".format(idx))
        sys.stdout.flush()

#Rimozione righe identificate
train_df = train_df.drop(train_df.index[to_remove_indexes])

print("New size: {0}".format(len(train_df)))
print("Removed {0} images".format(uncorrect_images))


Processed 5480 imagesNew size: 5434
Removed 66 images

In [11]:
#Eventuale campionamento da passare al generatore input
example_file_list = list(train_df.im_path)
print(len(example_file_list))


5434

Definizione dizionario dei labels {label: indice}


In [12]:
labels_dict = {}
unique_labels = set(labels_df['labels'])
for idx, target in enumerate(unique_labels):
    labels_dict[target] = idx
num_classes = len(labels_dict)
num_classes


Out[12]:
76003

Costruzione lista dei label (stesso ordine della lista di file)


In [13]:
example_label_list = []
for idx, value in train_df.iterrows():
    example_label_list.append(labels_dict[value['target_label']])
len(example_label_list)


Out[13]:
5434

In [14]:
num_classes = len(set(example_label_list))
num_classes


Out[14]:
11

In [15]:
reducted_label_dict = {}
for idx,value in enumerate(set(example_label_list)):
    reducted_label_dict[value] = idx
for idx,label in enumerate(example_label_list):
    example_label_list[idx] = reducted_label_dict[label]

Transfer Learning

Ripristino Inception v4 model


In [16]:
'''
get_network_fn for returning the corresponding network function.

Se num_classes è da cambiare, impostare is_training a True

Ritorna la funzione definita nel corrispetivo file della rete
'''
model_name = 'inception_v4'
inception_net_fn = nets_factory.get_network_fn(model_name,
                                               num_classes=1001,
                                               is_training = False
                                              )
'''
with tf.device('/gpu:0'):
    sampl_input = tf.placeholder(tf.float32, [None, 300,300, 3], name='incpetion_input_placeholder')
    #Invocazione della model fn per la definizione delle variabili della rete
    #Usa questi tensori che sono quelli per i quali passa il modello
    #Necessario per ripristinare il grafo
    print(inception_net_fn(sampl_input))
'''


Out[16]:
"\nwith tf.device('/gpu:0'):\n    sampl_input = tf.placeholder(tf.float32, [None, 300,300, 3], name='incpetion_input_placeholder')\n    #Invocazione della model fn per la definizione delle variabili della rete\n    #Usa questi tensori che sono quelli per i quali passa il modello\n    #Necessario per ripristinare il grafo\n    print(inception_net_fn(sampl_input))\n"

Input pipeline

Definizione della input pipeline al modello TF

NB: La memoria della GPU non va MAI oltre i 100MB!


In [17]:
EPOCHS = 50
BATCH_SIZE = 32
#Serve per capire quando il generatore è passato a batch appartenenti a una nuova epoca 
BATCH_PER_EPOCH = np.ceil(len(example_file_list) / BATCH_SIZE)

def parse_single_image(filename_queue):
    #Dequeue a file name from the file name queue
    #filename, y = filename_queue.dequeue()
    #Non bisogna invocare il dequeue il parametro della funziona è già lo scodamento
    filename, y = filename_queue[0], filename_queue[1]
    #A y manca solo il one-hot
    y = tf.one_hot(y, num_classes)
    #Read image
    raw = tf.read_file(filename)
    #convert in jpg (in GPU!)
    jpeg_image = tf.image.decode_jpeg(raw)
    #Preprocessing with inception preprocessing
    jpeg_image = inception_preprocessing.preprocess_image(jpeg_image, 300, 300, is_training=True)
    return jpeg_image, y
#jpeg_image = parse_single_image(filename_queue)

def get_batch(filenames, labels, batch_size, num_epochs=None):
    
    #Coda lettura file, slice_input_producer accetta una lista di liste (stessa dimensione)
    #Risultato dello scodamento è l'elemento corrente di ciascuna delle liste
    #Le liste sono rispettivamente la lista di file e la lista dei label
    filename_queue = tf.train.slice_input_producer([filenames, labels])
    
    #Lettura singolo record
    jpeg_image,y = parse_single_image(filename_queue)
    
    # min_after_dequeue defines how big a buffer we will randomly sample
    #   from -- bigger means better shuffling but slower start up and more
    #   memory used.
    # capacity must be larger than min_after_dequeue and the amount larger
    #   determines the maximum we will prefetch.  Recommendation:
    #   min_after_dequeue + (num_threads + a small safety margin) * batch_size
    min_after_dequeue = 10
    capacity = min_after_dequeue + 3 * batch_size
    
    #tensors è la lista dei tensori delle single feature e immagini. Esegue batch_size volte i tensori example e label per ottenere il batch
    #num_threads incrementa effettivamente l'utilizzo della CPU (confermato dal throughput visisible sul cloudera manager,
    #resta comunque un throughput lento ....
    example_batch = tf.train.shuffle_batch(
        tensors=[jpeg_image, y], batch_size=batch_size, capacity=capacity,
        min_after_dequeue=min_after_dequeue, allow_smaller_final_batch=True, num_threads=2)
    
    return example_batch


#TF Graph, per ora recupera solamente un batch
with tf.device('/cpu:0'):
    with tf.name_scope('preprocessing') as scope:
        x,y = get_batch(example_file_list, example_label_list, batch_size=BATCH_SIZE)
        #x = tf.contrib.layers.flatten(x)

with tf.device('/gpu:0'):
    #inception prelogits 
    inception_net_fn(x)
    #prelogits = tf.placeholder(tf.float32, [None, 1536], name='prelogits_placeholder')
    prelogits = tf.get_default_graph().get_tensor_by_name("InceptionV4/Logits/PreLogitsFlatten/Reshape:0") 

with tf.device('/gpu:0'):
    with tf.variable_scope('trainable'):
        '''with tf.variable_scope('hidden') as scope:
            hidden = tf.layers.dense(
                prelogits,
                units=128,
                activation=tf.nn.relu        
            )'''

        #Kenerl init None = glooroot initializers (sttdev = 1/sqrt(n))
        with tf.variable_scope('readout') as scope:
            output = tf.layers.dense(
                prelogits,
                units=num_classes,
                activation=None
            )

    with tf.variable_scope('train_op') as scope:
        # Define loss and optimizer
        targetvars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "trainable")
        cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y))
        optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost, var_list=targetvars)
        # Accuracy
        correct_pred = tf.equal(tf.argmax(output, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

tf.summary.scalar('accuracy', accuracy)
tf.summary.scalar('loss', cost)

init = tf. global_variables_initializer()

merged_summeries = tf.summary.merge_all()


INFO:tensorflow:Scale of 0 disables regularizer.

In [18]:
#GPU config
config = tf.ConfigProto(log_device_placement=True)
config.gpu_options.allow_growth = True
#Saver per restoring inception net
saver = tf.train.Saver()

with tf.Session(config=config) as sess:
    sess.run(init)
    writer = tf.summary.FileWriter(TRAINING_CHECKPOINT_DIR,
                                   sess.graph)
    #Start populating the filename queue.
    coord = tf.train.Coordinator()
    #Senza questa chiamata non partono i thread per popolare la coda che permette di eseguire la read
    threads = tf.train.start_queue_runners(coord=coord)
    #Current epoch and step servono a capire quando cambiare epoca e quando fermarsi
    current_epoch = 0
    current_step = 0
    while current_epoch < EPOCHS: 
        x_batch, y_batch = sess.run([x,y])
        #Forward pass nella incpetion net
        #inception_pre_logits = sess.run(tf.get_default_graph().get_tensor_by_name("InceptionV4/Logits/PreLogitsFlatten/Reshape:0"),
         #feed_dict={sampl_input: x_batch})
        sess.run(optimizer, feed_dict={x: x_batch, y: y_batch})
        #print(x_batch.shape)
        if current_step % 10 == 0:
            #print("Batch shape {}".format(x_batch.shape))
            print("Current step: {0}".format(current_step))
            train_loss, train_accuracy, train_summ  = sess.run([cost,accuracy,merged_summeries],
                                                               feed_dict={x: x_batch, y: y_batch})
            print("Loss: {0} accuracy {1}".format(train_loss, train_accuracy))
            writer.add_summary(train_summ, current_epoch * current_step + 1)
        #Cambiare epoca, raggiunto il massimo per l'epoca corrente
        if current_step == (BATCH_PER_EPOCH - 1):
            current_epoch += 1
            current_step = 0
            print("EPOCH {0}".format(current_epoch))
        #Epoche terminate -> chiudere
        if current_epoch >= EPOCHS:
            break

        if current_step == 0 and current_epoch == 0:
            writer.add_graph(sess.graph)
        #train_summary = sess.run([merged_summeries], feed_dict={x: x_batch, y: y_batch})
        #writer.add_summary(train_summary, current_step)
        current_step +=  1
    #for i in range(10):
        #converted_im = sess.run(jpeg_image)
        #print(converted_im.shape)
        
    #Chiusura del coordinator (chiudi i thread di lettura)
    coord.request_stop()
    coord.join(threads)
    sess.close()


Current step: 0
Loss: 2.5129287242889404 accuracy 0.1875
Current step: 10
Loss: 2.4275832176208496 accuracy 0.125
Current step: 20
Loss: 2.340498685836792 accuracy 0.1875
Current step: 30
Loss: 2.4241528511047363 accuracy 0.125
Current step: 40
Loss: 2.3813867568969727 accuracy 0.1875
Current step: 50
Loss: 2.468667507171631 accuracy 0.0625
Current step: 60
Loss: 2.4105844497680664 accuracy 0.0625
Current step: 70
Loss: 2.4713330268859863 accuracy 0.03125
Current step: 80
Loss: 2.482147693634033 accuracy 0.0625
Current step: 90
Loss: 2.4112768173217773 accuracy 0.09375
Current step: 100
Loss: 2.489518404006958 accuracy 0.0
Current step: 110
Loss: 2.445589542388916 accuracy 0.09375
Current step: 120
Loss: 2.6067144870758057 accuracy 0.09375
Current step: 130
Loss: 2.341503620147705 accuracy 0.21875
Current step: 140
Loss: 2.481048583984375 accuracy 0.0
Current step: 150
Loss: 2.4477105140686035 accuracy 0.09375
Current step: 160
Loss: 2.3633198738098145 accuracy 0.28125
EPOCH 1
Current step: 10
Loss: 2.294314384460449 accuracy 0.21875
Current step: 20
Loss: 2.3068158626556396 accuracy 0.125
Current step: 30
Loss: 2.3125362396240234 accuracy 0.125
Current step: 40
Loss: 2.427682399749756 accuracy 0.0625
Current step: 50
Loss: 2.479543685913086 accuracy 0.0625
Current step: 60
Loss: 2.376668930053711 accuracy 0.09375
Current step: 70
Loss: 2.374061107635498 accuracy 0.15625
Current step: 80
Loss: 2.445662021636963 accuracy 0.125
Current step: 90
Loss: 2.4164578914642334 accuracy 0.125
Current step: 100
Loss: 2.4573426246643066 accuracy 0.0625
Current step: 110
Loss: 2.3238766193389893 accuracy 0.1875
Current step: 120
Loss: 2.460556983947754 accuracy 0.0
Current step: 130
Loss: 2.3908538818359375 accuracy 0.03125
Current step: 140
Loss: 2.3557121753692627 accuracy 0.15625
Current step: 150
Loss: 2.4666881561279297 accuracy 0.0625
Current step: 160
Loss: 2.379157781600952 accuracy 0.21875
EPOCH 2
Current step: 10
Loss: 2.359149932861328 accuracy 0.0625
Current step: 20
Loss: 2.3096489906311035 accuracy 0.09375
Current step: 30
Loss: 2.355727434158325 accuracy 0.21875
Current step: 40
Loss: 2.3416519165039062 accuracy 0.25
Current step: 50
Loss: 2.3870017528533936 accuracy 0.1875
Current step: 60
Loss: 2.5441994667053223 accuracy 0.125
Current step: 70
Loss: 2.3084983825683594 accuracy 0.25
Current step: 80
Loss: 2.3421103954315186 accuracy 0.15625
Current step: 90
Loss: 2.2785158157348633 accuracy 0.15625
Current step: 100
Loss: 2.354095935821533 accuracy 0.21875
Current step: 110
Loss: 2.3940815925598145 accuracy 0.0625
Current step: 120
Loss: 2.3165218830108643 accuracy 0.25
Current step: 130
Loss: 2.4093031883239746 accuracy 0.125
Current step: 140
Loss: 2.2845211029052734 accuracy 0.1875
Current step: 150
Loss: 2.4306275844573975 accuracy 0.1875
Current step: 160
Loss: 2.368978261947632 accuracy 0.09375
EPOCH 3
Current step: 10
Loss: 2.472738742828369 accuracy 0.125
Current step: 20
Loss: 2.427647590637207 accuracy 0.125
Current step: 30
Loss: 2.2375922203063965 accuracy 0.3125
Current step: 40
Loss: 2.3464789390563965 accuracy 0.1875
Current step: 50
Loss: 2.3891234397888184 accuracy 0.15625
Current step: 60
Loss: 2.335573196411133 accuracy 0.1875
Current step: 70
Loss: 2.3575267791748047 accuracy 0.0625
Current step: 80
Loss: 2.3571970462799072 accuracy 0.09375
Current step: 90
Loss: 2.2845168113708496 accuracy 0.09375
Current step: 100
Loss: 2.5006208419799805 accuracy 0.15625
Current step: 110
Loss: 2.280369281768799 accuracy 0.1875
Current step: 120
Loss: 2.3990046977996826 accuracy 0.125
Current step: 130
Loss: 2.3285584449768066 accuracy 0.125
Current step: 140
Loss: 2.426931858062744 accuracy 0.09375
Current step: 150
Loss: 2.415464401245117 accuracy 0.125
Current step: 160
Loss: 2.4257497787475586 accuracy 0.0625
EPOCH 4
Current step: 10
Loss: 2.3156418800354004 accuracy 0.1875
Current step: 20
Loss: 2.364827871322632 accuracy 0.125
Current step: 30
Loss: 2.254568099975586 accuracy 0.21875
Current step: 40
Loss: 2.487118721008301 accuracy 0.03125
Current step: 50
Loss: 2.372596263885498 accuracy 0.09375
Current step: 60
Loss: 2.403766393661499 accuracy 0.09375
Current step: 70
Loss: 2.337301254272461 accuracy 0.1875
Current step: 80
Loss: 2.20710825920105 accuracy 0.25
Current step: 90
Loss: 2.234921932220459 accuracy 0.15625
Current step: 100
Loss: 2.398662567138672 accuracy 0.125
Current step: 110
Loss: 2.4561405181884766 accuracy 0.09375
Current step: 120
Loss: 2.3186514377593994 accuracy 0.1875
Current step: 130
Loss: 2.4015278816223145 accuracy 0.15625
Current step: 140
Loss: 2.374873399734497 accuracy 0.125
Current step: 150
Loss: 2.2621021270751953 accuracy 0.15625
Current step: 160
Loss: 2.344041585922241 accuracy 0.15625
EPOCH 5
Current step: 10
Loss: 2.3828701972961426 accuracy 0.1875
Current step: 20
Loss: 2.304605722427368 accuracy 0.09375
Current step: 30
Loss: 2.376924514770508 accuracy 0.15625
Current step: 40
Loss: 2.2680609226226807 accuracy 0.25
Current step: 50
Loss: 2.322700023651123 accuracy 0.09375
Current step: 60
Loss: 2.366103172302246 accuracy 0.15625
Current step: 70
Loss: 2.211580514907837 accuracy 0.34375
Current step: 80
Loss: 2.3299784660339355 accuracy 0.0625
Current step: 90
Loss: 2.3758983612060547 accuracy 0.125
Current step: 100
Loss: 2.3659043312072754 accuracy 0.125
Current step: 110
Loss: 2.323573112487793 accuracy 0.15625
Current step: 120
Loss: 2.282770872116089 accuracy 0.25
Current step: 130
Loss: 2.3762335777282715 accuracy 0.1875
Current step: 140
Loss: 2.3269357681274414 accuracy 0.1875
Current step: 150
Loss: 2.3897228240966797 accuracy 0.0625
Current step: 160
Loss: 2.3257369995117188 accuracy 0.125
EPOCH 6
Current step: 10
Loss: 2.354898452758789 accuracy 0.125
Current step: 20
Loss: 2.448961019515991 accuracy 0.28125
Current step: 30
Loss: 2.300652027130127 accuracy 0.09375
Current step: 40
Loss: 2.298259973526001 accuracy 0.125
Current step: 50
Loss: 2.3140764236450195 accuracy 0.1875
Current step: 60
Loss: 2.3086957931518555 accuracy 0.1875
Current step: 70
Loss: 2.2589874267578125 accuracy 0.25
Current step: 80
Loss: 2.2504522800445557 accuracy 0.25
Current step: 90
Loss: 2.348738670349121 accuracy 0.0625
Current step: 100
Loss: 2.3510100841522217 accuracy 0.1875
Current step: 110
Loss: 2.4643232822418213 accuracy 0.125
Current step: 120
Loss: 2.388218879699707 accuracy 0.125
Current step: 130
Loss: 2.311257839202881 accuracy 0.15625
Current step: 140
Loss: 2.293039321899414 accuracy 0.125
Current step: 150
Loss: 2.3351492881774902 accuracy 0.125
Current step: 160
Loss: 2.3422253131866455 accuracy 0.1875
EPOCH 7
Current step: 10
Loss: 2.1799423694610596 accuracy 0.3125
Current step: 20
Loss: 2.362262010574341 accuracy 0.1875
Current step: 30
Loss: 2.303514003753662 accuracy 0.21875
Current step: 40
Loss: 2.3527729511260986 accuracy 0.21875
Current step: 50
Loss: 2.3132541179656982 accuracy 0.3125
Current step: 60
Loss: 2.435335636138916 accuracy 0.125
Current step: 70
Loss: 2.4095048904418945 accuracy 0.15625
Current step: 80
Loss: 2.3749773502349854 accuracy 0.125
Current step: 90
Loss: 2.3189139366149902 accuracy 0.28125
Current step: 100
Loss: 2.309497833251953 accuracy 0.1875
Current step: 110
Loss: 2.321743965148926 accuracy 0.25
Current step: 120
Loss: 2.2805705070495605 accuracy 0.1875
Current step: 130
Loss: 2.3229756355285645 accuracy 0.1875
Current step: 140
Loss: 2.268439769744873 accuracy 0.25
Current step: 150
Loss: 2.214064121246338 accuracy 0.28125
Current step: 160
Loss: 2.4231338500976562 accuracy 0.15625
EPOCH 8
Current step: 10
Loss: 2.359771251678467 accuracy 0.09375
Current step: 20
Loss: 2.344373941421509 accuracy 0.15625
Current step: 30
Loss: 2.2594761848449707 accuracy 0.28125
Current step: 40
Loss: 2.378200054168701 accuracy 0.1875
Current step: 50
Loss: 2.2981276512145996 accuracy 0.25
Current step: 60
Loss: 2.28592586517334 accuracy 0.25
Current step: 70
Loss: 2.288090229034424 accuracy 0.21875
Current step: 80
Loss: 2.3732314109802246 accuracy 0.15625
Current step: 90
Loss: 2.439406394958496 accuracy 0.125
Current step: 100
Loss: 2.4344420433044434 accuracy 0.03125
Current step: 110
Loss: 2.3646481037139893 accuracy 0.21875
Current step: 120
Loss: 2.4472577571868896 accuracy 0.125
Current step: 130
Loss: 2.2674660682678223 accuracy 0.15625
Current step: 140
Loss: 2.2220616340637207 accuracy 0.25
Current step: 150
Loss: 2.273571729660034 accuracy 0.125
Current step: 160
Loss: 2.2972211837768555 accuracy 0.15625
EPOCH 9
Current step: 10
Loss: 2.450946807861328 accuracy 0.0625
Current step: 20
Loss: 2.4767351150512695 accuracy 0.125
Current step: 30
Loss: 2.3627560138702393 accuracy 0.21875
Current step: 40
Loss: 2.2345151901245117 accuracy 0.15625
Current step: 50
Loss: 2.212468385696411 accuracy 0.3125
Current step: 60
Loss: 2.2985291481018066 accuracy 0.15625
Current step: 70
Loss: 2.294308662414551 accuracy 0.1875
Current step: 80
Loss: 2.212893009185791 accuracy 0.1875
Current step: 90
Loss: 2.238955020904541 accuracy 0.28125
Current step: 100
Loss: 2.2765023708343506 accuracy 0.15625
Current step: 110
Loss: 2.3384156227111816 accuracy 0.15625
Current step: 120
Loss: 2.226191997528076 accuracy 0.25
Current step: 130
Loss: 2.3575057983398438 accuracy 0.0625
Current step: 140
Loss: 2.402097225189209 accuracy 0.125
Current step: 150
Loss: 2.3285739421844482 accuracy 0.03125
Current step: 160
Loss: 2.277543306350708 accuracy 0.09375
EPOCH 10
Current step: 10
Loss: 2.3629469871520996 accuracy 0.09375
Current step: 20
Loss: 2.286078691482544 accuracy 0.15625
Current step: 30
Loss: 2.2258248329162598 accuracy 0.25
Current step: 40
Loss: 2.4026131629943848 accuracy 0.09375
Current step: 50
Loss: 2.23561429977417 accuracy 0.21875
Current step: 60
Loss: 2.2282185554504395 accuracy 0.25
Current step: 70
Loss: 2.2993125915527344 accuracy 0.25
Current step: 80
Loss: 2.4143424034118652 accuracy 0.0625
Current step: 90
Loss: 2.3338499069213867 accuracy 0.1875
Current step: 100
Loss: 2.246335983276367 accuracy 0.0625
Current step: 110
Loss: 2.541498899459839 accuracy 0.15625
Current step: 120
Loss: 2.442999839782715 accuracy 0.09375
Current step: 130
Loss: 2.308549404144287 accuracy 0.1875
Current step: 140
Loss: 2.216637134552002 accuracy 0.34375
Current step: 150
Loss: 2.3202624320983887 accuracy 0.09375
Current step: 160
Loss: 2.3183093070983887 accuracy 0.21875
EPOCH 11
Current step: 10
Loss: 2.402116298675537 accuracy 0.25
Current step: 20
Loss: 2.2492332458496094 accuracy 0.15625
Current step: 30
Loss: 2.3395819664001465 accuracy 0.1875
Current step: 40
Loss: 2.3580455780029297 accuracy 0.0625
Current step: 50
Loss: 2.3136253356933594 accuracy 0.09375
Current step: 60
Loss: 2.244309663772583 accuracy 0.21875
Current step: 70
Loss: 2.388603687286377 accuracy 0.1875
Current step: 80
Loss: 2.4558846950531006 accuracy 0.09375
Current step: 90
Loss: 2.354668140411377 accuracy 0.28125
Current step: 100
Loss: 2.2838611602783203 accuracy 0.1875
Current step: 110
Loss: 2.2936065196990967 accuracy 0.1875
Current step: 120
Loss: 2.3059029579162598 accuracy 0.15625
Current step: 130
Loss: 2.263805866241455 accuracy 0.1875
Current step: 140
Loss: 2.2682576179504395 accuracy 0.28125
Current step: 150
Loss: 2.323241949081421 accuracy 0.0625
Current step: 160
Loss: 2.1801295280456543 accuracy 0.25
EPOCH 12
Current step: 10
Loss: 2.246455192565918 accuracy 0.25
Current step: 20
Loss: 2.202493190765381 accuracy 0.25
Current step: 30
Loss: 2.3593955039978027 accuracy 0.15625
Current step: 40
Loss: 2.27852463722229 accuracy 0.28125
Current step: 50
Loss: 2.1962804794311523 accuracy 0.1875
Current step: 60
Loss: 2.3841493129730225 accuracy 0.21875
Current step: 70
Loss: 2.3280534744262695 accuracy 0.09375
Current step: 80
Loss: 2.2271628379821777 accuracy 0.28125
Current step: 90
Loss: 2.315380811691284 accuracy 0.34375
Current step: 100
Loss: 2.3014025688171387 accuracy 0.125
Current step: 110
Loss: 2.293855667114258 accuracy 0.1875
Current step: 120
Loss: 2.2320992946624756 accuracy 0.21875
Current step: 130
Loss: 2.3359532356262207 accuracy 0.21875
Current step: 140
Loss: 2.1392955780029297 accuracy 0.3125
Current step: 150
Loss: 2.2693519592285156 accuracy 0.15625
Current step: 160
Loss: 2.2756972312927246 accuracy 0.25
EPOCH 13
Current step: 10
Loss: 2.364438772201538 accuracy 0.125
Current step: 20
Loss: 2.1814565658569336 accuracy 0.15625
Current step: 30
Loss: 2.2269182205200195 accuracy 0.1875
Current step: 40
Loss: 2.2934517860412598 accuracy 0.1875
Current step: 50
Loss: 2.247753143310547 accuracy 0.15625
Current step: 60
Loss: 2.2071595191955566 accuracy 0.21875
Current step: 70
Loss: 2.3171863555908203 accuracy 0.25
Current step: 80
Loss: 2.2840576171875 accuracy 0.21875
Current step: 90
Loss: 2.306089401245117 accuracy 0.0625
Current step: 100
Loss: 2.358604669570923 accuracy 0.09375
Current step: 110
Loss: 2.1330206394195557 accuracy 0.21875
Current step: 120
Loss: 2.2403645515441895 accuracy 0.21875
Current step: 130
Loss: 2.33551025390625 accuracy 0.15625
Current step: 140
Loss: 2.220452308654785 accuracy 0.1875
Current step: 150
Loss: 2.2927963733673096 accuracy 0.25
Current step: 160
Loss: 2.296999454498291 accuracy 0.1875
EPOCH 14
Current step: 10
Loss: 2.2636847496032715 accuracy 0.1875
Current step: 20
Loss: 2.3421075344085693 accuracy 0.15625
Current step: 30
Loss: 2.1184639930725098 accuracy 0.28125
Current step: 40
Loss: 2.3807973861694336 accuracy 0.125
Current step: 50
Loss: 2.3892171382904053 accuracy 0.21875
Current step: 60
Loss: 2.2767868041992188 accuracy 0.25
Current step: 70
Loss: 2.1882028579711914 accuracy 0.21875
Current step: 80
Loss: 2.2089591026306152 accuracy 0.28125
Current step: 90
Loss: 2.2852425575256348 accuracy 0.28125
Current step: 100
Loss: 2.2170445919036865 accuracy 0.34375
Current step: 110
Loss: 2.277245044708252 accuracy 0.25
Current step: 120
Loss: 2.4141268730163574 accuracy 0.1875
Current step: 130
Loss: 2.3300564289093018 accuracy 0.21875
Current step: 140
Loss: 2.1189913749694824 accuracy 0.375
Current step: 150
Loss: 2.0822830200195312 accuracy 0.3125
Current step: 160
Loss: 2.196434736251831 accuracy 0.1875
EPOCH 15
Current step: 10
Loss: 2.4302244186401367 accuracy 0.09375
Current step: 20
Loss: 2.410517454147339 accuracy 0.125
Current step: 30
Loss: 2.442936897277832 accuracy 0.0
Current step: 40
Loss: 2.2618093490600586 accuracy 0.21875
Current step: 50
Loss: 2.225010395050049 accuracy 0.25
Current step: 60
Loss: 2.366292953491211 accuracy 0.0625
Current step: 70
Loss: 2.2814345359802246 accuracy 0.03125
Current step: 80
Loss: 2.2059459686279297 accuracy 0.25
Current step: 90
Loss: 2.3246161937713623 accuracy 0.25
Current step: 100
Loss: 2.2415621280670166 accuracy 0.1875
Current step: 110
Loss: 2.3221468925476074 accuracy 0.15625
Current step: 120
Loss: 2.1420323848724365 accuracy 0.3125
Current step: 130
Loss: 2.262669324874878 accuracy 0.21875
Current step: 140
Loss: 2.2032644748687744 accuracy 0.21875
Current step: 150
Loss: 2.3280534744262695 accuracy 0.1875
Current step: 160
Loss: 2.3078768253326416 accuracy 0.28125
EPOCH 16
Current step: 10
Loss: 2.3515615463256836 accuracy 0.21875
Current step: 20
Loss: 2.36397123336792 accuracy 0.1875
Current step: 30
Loss: 2.1354308128356934 accuracy 0.3125
Current step: 40
Loss: 2.353476047515869 accuracy 0.0625
Current step: 50
Loss: 2.212703227996826 accuracy 0.25
Current step: 60
Loss: 2.308811664581299 accuracy 0.1875
Current step: 70
Loss: 2.2962093353271484 accuracy 0.25
Current step: 80
Loss: 2.425044059753418 accuracy 0.09375
Current step: 90
Loss: 2.1793198585510254 accuracy 0.25
Current step: 100
Loss: 2.1629090309143066 accuracy 0.3125
Current step: 110
Loss: 2.359266757965088 accuracy 0.25
Current step: 120
Loss: 2.3109078407287598 accuracy 0.1875
Current step: 130
Loss: 2.3592312335968018 accuracy 0.25
Current step: 140
Loss: 2.1329619884490967 accuracy 0.21875
Current step: 150
Loss: 2.26998233795166 accuracy 0.125
Current step: 160
Loss: 2.156911611557007 accuracy 0.21875
EPOCH 17
Current step: 10
Loss: 2.22196364402771 accuracy 0.1875
Current step: 20
Loss: 2.339437484741211 accuracy 0.125
Current step: 30
Loss: 2.3673462867736816 accuracy 0.1875
Current step: 40
Loss: 2.4124598503112793 accuracy 0.125
Current step: 50
Loss: 2.257690906524658 accuracy 0.3125
Current step: 60
Loss: 2.302187204360962 accuracy 0.15625
Current step: 70
Loss: 2.139050006866455 accuracy 0.28125
Current step: 80
Loss: 2.273402214050293 accuracy 0.28125
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.CancelledError'>, Enqueue operation was cancelled
	 [[Node: preprocessing/input_producer/input_producer/input_producer_EnqueueMany = QueueEnqueueManyV2[Tcomponents=[DT_INT32], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](preprocessing/input_producer/input_producer, preprocessing/input_producer/input_producer/RandomShuffle)]]
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-18-5e3fbecdf23f> in <module>()
     21         #inception_pre_logits = sess.run(tf.get_default_graph().get_tensor_by_name("InceptionV4/Logits/PreLogitsFlatten/Reshape:0"),
     22          #feed_dict={sampl_input: x_batch})
---> 23         sess.run(optimizer, feed_dict={x: x_batch, y: y_batch})
     24         #print(x_batch.shape)
     25         if current_step % 10 == 0:

/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    787     try:
    788       result = self._run(None, fetches, feed_dict, options_ptr,
--> 789                          run_metadata_ptr)
    790       if run_metadata:
    791         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
    995     if final_fetches or final_targets:
    996       results = self._do_run(handle, final_targets, final_fetches,
--> 997                              feed_dict_string, options, run_metadata)
    998     else:
    999       results = []

/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1130     if handle is None:
   1131       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
-> 1132                            target_list, options, run_metadata)
   1133     else:
   1134       return self._do_call(_prun_fn, self._session, handle, feed_dict,

/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1137   def _do_call(self, fn, *args):
   1138     try:
-> 1139       return fn(*args)
   1140     except errors.OpError as e:
   1141       message = compat.as_text(e.message)

/home/carnd/anaconda3/envs/dl/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
   1119         return tf_session.TF_Run(session, options,
   1120                                  feed_dict, fetch_list, target_list,
-> 1121                                  status, run_metadata)
   1122 
   1123     def _prun_fn(session, handle, feed_dict, fetch_list):

KeyboardInterrupt: 

In [25]:
tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "trainable")


Out[25]:
[<tf.Variable 'trainable/dense/kernel:0' shape=(1536, 128) dtype=float32_ref>,
 <tf.Variable 'trainable/dense/bias:0' shape=(128,) dtype=float32_ref>,
 <tf.Variable 'trainable/dense_1/kernel:0' shape=(128, 6) dtype=float32_ref>,
 <tf.Variable 'trainable/dense_1/bias:0' shape=(6,) dtype=float32_ref>]

In [ ]: