A simple exploration notebook to gain some insights into the data.

As per the NDA, sample photos are confidential, and it also states that confidential information cannot be disclosed without written consent from the Sponsors. More about the NDA can be found in this forum post. Thank you, Alan, for pointing it out to me.

So here is the revised version of the exploration notebook, in which the animation part is commented out.

Please uncomment the animation part of the notebook and run it locally to view the animation.

Objective:

In this competition, The Nature Conservancy asks you to help them detect which species of fish appears on a fishing boat, based on images captured by boat cameras at various angles.

Your goal is to predict the likelihood of fish species in each picture.

As mentioned on the data page, there are eight target categories in the dataset:

  1. Albacore tuna
  2. Bigeye tuna
  3. Yellowfin tuna
  4. Mahi Mahi
  5. Opah
  6. Sharks
  7. Other (meaning that there are fish present but not in the above categories)
  8. No Fish (meaning that no fish is in the picture)

Important points to note:

  1. Pre-trained models and external data are allowed in the competition, but they need to be posted on this official forum thread
  2. The competition comprises two stages. Test data for the second stage will be released in the last week.

First, let us see the number of image files present for each of the species.


In [1]:
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here are several helpful packages to load

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from scipy.misc import imread # image reading (removed in newer SciPy; imageio.imread is the usual replacement)
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

from subprocess import check_output
print(check_output(["ls", "../input/train/"]).decode("utf8"))


ALB
BET
DOL
LAG
NoF
OTHER
SHARK
YFT

So there are 8 folders present inside the train folder, one for each target category.
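The folder names are abbreviations of the target categories. For readability, here is a small mapping between the two (my reading of the data page, so treat it as an assumption rather than an official definition):

# Mapping from folder name to target category (inferred, not official)
folder_to_species = {
    'ALB': 'Albacore tuna',
    'BET': 'Bigeye tuna',
    'DOL': 'Mahi Mahi',      # dolphinfish
    'LAG': 'Opah',           # moonfish
    'NoF': 'No Fish',
    'OTHER': 'Other',
    'SHARK': 'Sharks',
    'YFT': 'Yellowfin tuna',
}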

Now let us check the number of files present in each of these sub-folders.


In [2]:
sub_folders = check_output(["ls", "../input/train/"]).decode("utf8").strip().split('\n')
count_dict = {}
for sub_folder in sub_folders:
    num_of_files = len(check_output(["ls", "../input/train/"+sub_folder]).decode("utf8").strip().split('\n'))
    print("Number of files for the species",sub_folder,":",num_of_files)
    count_dict[sub_folder] = num_of_files
    
plt.figure(figsize=(12,4))
sns.barplot(list(count_dict.keys()), list(count_dict.values()), alpha=0.8)
plt.xlabel('Fish Species', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.show()


Number of files for the species ALB : 1719
Number of files for the species BET : 200
Number of files for the species DOL : 117
Number of files for the species LAG : 67
Number of files for the species NoF : 465
Number of files for the species OTHER : 299
Number of files for the species SHARK : 176
Number of files for the species YFT : 734

So the number of files for species ALB (Albacore tuna) is much higher than for the other species.
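Given this imbalance, one sanity check is the log loss obtained by simply predicting the training class priors for every image. A minimal sketch, reusing the count_dict computed above and assuming the test distribution is similar to train:

# Class priors from the training counts computed above
total = float(sum(count_dict.values()))
priors = {k: v / total for k, v in count_dict.items()}
print(priors)

# Expected log loss when predicting the priors for every image,
# i.e. the entropy of the class distribution -- a floor to beat
prior_logloss = -sum((v / total) * np.log(v / total) for v in count_dict.values())
print("Prior-only log loss:", prior_logloss)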

Let us look at the number of files present in the test folder.


In [3]:
num_test_files = len(check_output(["ls", "../input/test_stg1/"]).decode("utf8").strip().split('\n'))
print("Number of test files present :", num_test_files)


Number of test files present : 1000

Image Size:

Now let us look at the image size of each file and see what different sizes are present.


In [4]:
train_path = "../input/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
for sub_folder in sub_folders:
    file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
    for file_name in file_names:
        im_array = imread(train_path+sub_folder+"/"+file_name)
        size = "_".join(map(str,list(im_array.shape)))
        different_file_sizes[size] = different_file_sizes.get(size,0) + 1

plt.figure(figsize=(12,4))
sns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)
plt.xlabel('Image size', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.title("Image size present in train dataset")
plt.xticks(rotation='vertical')
plt.show()


So 720_1280_3 is the most common image size in the train data, and 10 different sizes are present.

720_1244_3 is the smallest of the available image sizes in the train set, and 974_1732_3 is the largest.
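Rather than reading the extremes off the plot, they can be confirmed from the different_file_sizes dictionary built above (a quick sketch):

# Sort the observed sizes by pixel count to confirm the extremes
def num_pixels(size_str):
    h, w, c = map(int, size_str.split('_'))
    return h * w

sorted_sizes = sorted(different_file_sizes, key=num_pixels)
print("Smallest:", sorted_sizes[0], "Largest:", sorted_sizes[-1])
print("Distinct sizes:", len(sorted_sizes))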

Now let us look at the distribution in the test dataset as well.


In [5]:
test_path = "../input/test_stg1/"
file_names = check_output(["ls", test_path]).decode("utf8").strip().split('\n')
different_file_sizes = {}
for file_name in file_names:
        size = "_".join(map(str,list(imread(test_path+file_name).shape)))
        different_file_sizes[size] = different_file_sizes.get(size,0) + 1

plt.figure(figsize=(12,4))
sns.barplot(list(different_file_sizes.keys()), list(different_file_sizes.values()), alpha=0.8)
plt.xlabel('Image size', fontsize=12)
plt.ylabel('Number of Images', fontsize=12)
plt.xticks(rotation='vertical')
plt.title("Image size present in test dataset")
plt.show()


The test set also has a very similar distribution.

Animation:

Let us try some animation over the available images. I was not able to embed the video in the notebook.

Please uncomment the following code and run it locally to view the animation.


In [6]:
"""
import random
from matplotlib import animation, rc
from IPython.display import HTML

random.seed(12345)
train_path = "../input/train/"
sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
all_files = []
for sub_folder in sub_folders:
    file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
    selected_files = random.sample(file_names, 10)
    for file_name in selected_files:
        all_files.append([sub_folder,file_name])

fig = plt.figure()
sns.set_style("whitegrid", {'axes.grid' : False})
# Initialize the figure with the last sampled image; sub_folder and
# file_name still hold the values from the final loop iteration above
img_file = "".join([train_path, sub_folder, "/", file_name])
im = plt.imshow(imread(img_file), vmin=0, vmax=255)

def updatefig(ind):
    sub_folder = all_files[ind][0]
    file_name = all_files[ind][1]
    img_file = "".join([train_path, sub_folder, "/", file_name])
    im.set_array(imread(img_file))
    plt.title("Species : "+sub_folder, fontsize=15)
    return im,

ani = animation.FuncAnimation(fig, updatefig, frames=len(all_files))
ani.save('lb.gif', fps=1, writer='imagemagick')
#rc('animation', html='html5')
#HTML(ani.to_html5_video())
plt.show()
"""



Basic CNN Model using Keras:

Now let us try to build a CNN model on the dataset. Due to the memory constraints of the kernels, let us take only a (500, 500, 3) array from the top-left corner of each image and try to classify based on that portion.

Kindly note that running it offline with the full images will give much better results. This is just a starter script I tried, and I am a newbie at image classification problems.
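One caveat: the generator below slices im_array[:500, :500, :] directly, which assumes every image is at least 500x500 with 3 channels. That holds here (the smallest train size above is 720_1244_3), but a padded crop is safer if you adapt this to other data. A minimal sketch (the helper name safe_crop is mine, not part of the original script):

import numpy as np  # already imported above

def safe_crop(im_array, rows=500, cols=500):
    # Replicate grayscale to 3 channels, drop any alpha channel,
    # and zero-pad if the image is smaller than the crop window
    if im_array.ndim == 2:
        im_array = np.stack([im_array] * 3, axis=-1)
    out = np.zeros((rows, cols, 3), dtype='float32')
    crop = im_array[:rows, :cols, :3]
    out[:crop.shape[0], :crop.shape[1], :] = crop
    return out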


In [7]:
import random
from subprocess import check_output
from scipy.misc import imread
import numpy as np
np.random.seed(2016)
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K

batch_size = 1
nb_classes = 8
nb_epoch = 1

img_rows, img_cols, img_rgb = 500, 500, 3
nb_filters = 4
pool_size = (2, 2)
kernel_size = (3, 3)
input_shape = (img_rows, img_cols, 3)

species_map_dict = {
'ALB':0,
'BET':1,
'DOL':2,
'LAG':3,
'NoF':4,
'OTHER':5,
'SHARK':6,
'YFT':7
}

# Infinite generator yielding one randomly selected (image, one-hot label)
# pair at a time (i.e. batch size 1), as expected by fit_generator below
def batch_generator_train(sample_size):
	train_path = "../input/train/"
	all_files = []
	y_values = []
	sub_folders = check_output(["ls", train_path]).decode("utf8").strip().split('\n')
	for sub_folder in sub_folders:
		file_names = check_output(["ls", train_path+sub_folder]).decode("utf8").strip().split('\n')
		for file_name in file_names:
			all_files.append([sub_folder, '/', file_name])
			y_values.append(species_map_dict[sub_folder])
	number_of_images = range(len(all_files))

	counter = 0
	while True:
		image_index = random.choice(number_of_images)
		file_name = "".join([train_path] + all_files[image_index])
		print(file_name)
		y = [0]*8
		y[y_values[image_index]] = 1
		y = np.array(y).reshape(1,8)
		
		im_array = imread(file_name)
		X = np.zeros([1, img_rows, img_cols, img_rgb])
		#X[:im_array.shape[0], :im_array.shape[1], 3] = im_array.copy().astype('float32')
		X[0, :, :, :] = im_array[:500,:500,:].astype('float32')
		X /= 255.
        
		print(X.shape)
		yield X,y
		
		counter += 1
		#if counter == sample_size:
		#	break

# Generator yielding one cropped test image at a time, in file order
def batch_generator_test(all_files):
	for file_name in all_files:
		file_name = test_path + file_name
		
		im_array = imread(file_name)
		X = np.zeros([1, img_rows, img_cols, img_rgb])
		X[0,:, :, :] = im_array[:500,:500,:].astype('float32')
		X /= 255.

		yield X


def keras_cnn_model():
	model = Sequential()
	model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
	model.add(Activation('relu'))
	model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
	model.add(Activation('relu'))
	model.add(MaxPooling2D(pool_size=pool_size))
	model.add(Dropout(0.25))	
	model.add(Flatten())
	model.add(Dense(128))
	model.add(Activation('relu'))
	model.add(Dropout(0.5))
	model.add(Dense(nb_classes))
	model.add(Activation('softmax'))
	model.compile(loss='categorical_crossentropy', optimizer='adadelta')
	return model

model = keras_cnn_model()
fit = model.fit_generator(
	generator = batch_generator_train(100),
	nb_epoch = 1,
	samples_per_epoch = 100
)

test_path = "../input/test_stg1/"
all_files = []
file_names = check_output(["ls", test_path]).decode("utf8").strip().split('\n')
for file_name in file_names:
	all_files.append(file_name)
#preds = model.predict_generator(generator=batch_generator_test(all_files), val_samples=len(all_files))

#out_df = pd.DataFrame(preds)
#out_df.columns = ['ALB', 'BET', 'DOL', 'LAG', 'NoF', 'OTHER', 'SHARK', 'YFT']
#out_df['image'] = all_files
#out_df.to_csv("sample_sub_keras.csv", index=False)


Using TensorFlow backend.
Epoch 1/1
../input/train/OTHER/img_02942.jpg
(1, 500, 500, 3)
../input/train/LAG/img_00784.jpg
(1, 500, 500, 3)
  1/100 [..............................] - ETA: 194s - loss: 2.4752
[... per-image paths and progress lines truncated ...]
100/100 [==============================] - 110s - loss: 3.5291
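Once the commented prediction block above is uncommented locally, a common safeguard for the log-loss metric is to clip the predicted probabilities before writing the submission, so a single confident mistake cannot blow up the score. A sketch building on the commented-out code (the clip bounds 0.02/0.98 are an arbitrary assumption, not tuned):

# preds is the (num_images, 8) array from model.predict_generator above
clipped = np.clip(preds, 0.02, 0.98)
clipped = clipped / clipped.sum(axis=1, keepdims=True)  # renormalize rows

out_df = pd.DataFrame(clipped, columns=['ALB', 'BET', 'DOL', 'LAG',
                                        'NoF', 'OTHER', 'SHARK', 'YFT'])
out_df['image'] = all_files
out_df.to_csv("sample_sub_keras_clipped.csv", index=False)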