Using real time data augmentation
----------------------------------------
Epoch 0
----------------------------------------
Training...
50000/50000 [==============================] - 5094s - train loss: 1.5760
Testing...
10000/10000 [==============================] - 322s - test loss: 1.2737
----------------------------------------
Epoch 1
----------------------------------------
Training...
50000/50000 [==============================] - 5032s - train loss: 1.2557
Testing...
10000/10000 [==============================] - 323s - test loss: 1.0844
----------------------------------------
Epoch 2
----------------------------------------
Training...
50000/50000 [==============================] - 5032s - train loss: 1.1119
Testing...
10000/10000 [==============================] - 322s - test loss: 0.9516
----------------------------------------
Epoch 3
----------------------------------------
Training...
50000/50000 [==============================] - 5023s - train loss: 1.0350
Testing...
10000/10000 [==============================] - 323s - test loss: 0.9255
----------------------------------------
Epoch 4
----------------------------------------
Training...
50000/50000 [==============================] - 5025s - train loss: 0.9826
Testing...
10000/10000 [==============================] - 322s - test loss: 0.8485
----------------------------------------
Epoch 5
----------------------------------------
Training...
50000/50000 [==============================] - 5028s - train loss: 0.9372
Testing...
10000/10000 [==============================] - 323s - test loss: 0.8029
----------------------------------------
Epoch 6
----------------------------------------
Training...
50000/50000 [==============================] - 5565s - train loss: 0.9038
Testing...
10000/10000 [==============================] - 369s - test loss: 0.7880
----------------------------------------
Epoch 7
----------------------------------------
Training...
50000/50000 [==============================] - 5789s - train loss: 0.8790
Testing...
10000/10000 [==============================] - 342s - test loss: 0.7912
----------------------------------------
Epoch 8
----------------------------------------
Training...
29504/50000 [================>.............] - ETA: 2128s - train loss: 0.8570
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
<ipython-input-18-7b8b1bcd330e> in <module>()
38 progbar = generic_utils.Progbar(X_train.shape[0])
39 for X_batch, Y_batch in datagen.flow(X_train, Y_train):
---> 40 loss = model.train_on_batch(X_batch, Y_batch)
41 progbar.add(X_batch.shape[0], values=[("train loss", loss)])
42
/Users/rbussman/anaconda/lib/python2.7/site-packages/keras/models.pyc in train_on_batch(self, X, y, accuracy, sample_weight)
357 return self._train_with_acc(*ins)
358 else:
--> 359 return self._train(*ins)
360
361
/Users/rbussman/anaconda/lib/python2.7/site-packages/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
593 t0_fn = time.time()
594 try:
--> 595 outputs = self.fn()
596 except Exception:
597 if hasattr(self.fn, 'position_of_error'):
/Users/rbussman/anaconda/lib/python2.7/site-packages/theano/gof/vm.pyc in __call__(self)
231 old_s[0] = None
232 except:
--> 233 link.raise_with_op(node, thunk)
234
235
/Users/rbussman/anaconda/lib/python2.7/site-packages/theano/gof/vm.pyc in __call__(self)
227 for thunk, node, old_storage in zip(self.thunks, self.nodes,
228 self.post_thunk_clear):
--> 229 thunk()
230 for old_s in old_storage:
231 old_s[0] = None
/Users/rbussman/anaconda/lib/python2.7/site-packages/theano/gof/op.pyc in rval()
741
742 def rval():
--> 743 fill_storage()
744 for o in node.outputs:
745 compute_map[o][0] = True
/Users/rbussman/anaconda/lib/python2.7/site-packages/theano/gof/cc.pyc in __call__(self)
1524
1525 def __call__(self):
-> 1526 failure = run_cthunk(self.cthunk)
1527 if failure:
1528 task, taskname, id = self.find_task(failure)
KeyboardInterrupt:
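
The traceback above shows the loop that produced this log: `datagen.flow(X_train, Y_train)` yields augmented mini-batches on the fly, `model.train_on_batch(X_batch, Y_batch)` takes one gradient step per batch, and `generic_utils.Progbar` tracks the running loss until the run is interrupted by hand mid-epoch 8. As a rough illustration of what "real time data augmentation" means here, the sketch below is a minimal NumPy-only generator that shuffles the data and randomly flips images each epoch; the function name, batch size, and the choice of horizontal flips are illustrative assumptions, not the Keras `ImageDataGenerator` implementation.

```python
import numpy as np

def augment_flow(X, Y, batch_size=32, seed=0):
    """Yield shuffled mini-batches, randomly flipping each image
    left-right on the fly (real-time augmentation sketch)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        X_batch = X[batch].copy()
        # Flip roughly half the images horizontally (axis 2 = width for
        # NHWC arrays). A fresh random choice is made on every pass, so
        # the model never sees a fixed, precomputed augmented dataset.
        flips = rng.random(len(batch)) < 0.5
        X_batch[flips] = X_batch[flips][:, :, ::-1]
        yield X_batch, Y[batch]

# Tiny demo: 8 fake 4x4 RGB "images".
X = np.arange(8 * 4 * 4 * 3, dtype=np.float32).reshape(8, 4, 4, 3)
Y = np.arange(8)
seen = 0
for X_batch, Y_batch in augment_flow(X, Y, batch_size=4):
    seen += len(X_batch)   # here the model.train_on_batch(...) call would run
assert seen == 8           # every sample is visited exactly once per epoch
```

Because augmentation happens inside the generator, each epoch trains on a slightly different version of the 50000 images, which is why the test loss in the log keeps improving (1.27 down to 0.79) without the model memorizing a fixed augmented set.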