Save and Load

In this notebook, we will:

  • train a LeNet and save it
  • load the model from the file
  • test the loaded model

In [1]:
from tfs.models import LeNet
from tfs.dataset import Mnist
net = LeNet()
dataset = Mnist()

  • modify the network:
    • use the L1 regularizer
    • use the SGD optimizer
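
For reference, the L1 regularizer adds a penalty of the form `l1 * Σ|w|` over all weights to the loss (here `l1=0.001`, matching the coefficient used below). A minimal pure-Python sketch of that penalty, independent of the tfs API:

```python
def l1_penalty(weights, l1=0.001):
    """L1 term added to the loss: l1 * sum of absolute weight values."""
    return l1 * sum(abs(w) for tensor in weights for w in tensor)

# toy weight tensors, flattened to lists
ws = [[1.0, -2.0, 0.5, 0.0], [3.0, -1.0]]
penalty = l1_penalty(ws)  # 0.001 * (1 + 2 + 0.5 + 0 + 3 + 1)
```

Because the penalty grows with the absolute value of every weight, minimizing the regularized loss pushes small weights toward exactly zero, which is why L1 is used to encourage sparsity.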

In [2]:
from tfs.core.optimizer import GradientDecentOptimizer
from tfs.core.regularizers import L1

net.optimizer = GradientDecentOptimizer(net)
net.regularizer = L1(net,l1=0.001)

net.build()


Out[2]:
<tf.Tensor 'prob:0' shape=(?, 10) dtype=float32>

In [3]:
net.fit(dataset,batch_size=200,n_epoch=1,max_step=100)


step 10. loss 16.249886, score:0.403800
step 20. loss 15.966318, score:0.514600
step 30. loss 15.660644, score:0.597500
step 40. loss 15.390107, score:0.671600
step 50. loss 15.192878, score:0.712800
step 60. loss 15.279219, score:0.747200
step 70. loss 15.208706, score:0.776600
step 80. loss 15.325734, score:0.794400
step 90. loss 15.101788, score:0.813800
step 100. loss 15.025336, score:0.827500
Out[3]:
<tfs.models.lenet.LeNet at 0x103f28c90>

In [4]:
net.save('lenet_epoch_1')

In [5]:
!ls ./


0.Train-LeNet.ipynb           4.Custom-Initializer.ipynb
1.Save-and-load.ipynb         lenet_epoch_1.model
2.Visualize-LeNet.ipynb       lenet_epoch_1.modeldef
3.Define-Custom-Network.ipynb
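
`save` produced two files: `lenet_epoch_1.model` (presumably the trained weights) and `lenet_epoch_1.modeldef` (presumably the network definition). A minimal sketch of the same weights-plus-definition split, using pickle and JSON rather than the actual tfs on-disk format (the file layout and helper names here are illustrative assumptions, not the tfs implementation):

```python
import json
import os
import pickle
import tempfile

def save_split(prefix, weights, definition):
    # weights -> binary .model file; architecture -> readable .modeldef file
    with open(prefix + '.model', 'wb') as f:
        pickle.dump(weights, f)
    with open(prefix + '.modeldef', 'w') as f:
        json.dump(definition, f)

def load_split(prefix):
    with open(prefix + '.model', 'rb') as f:
        weights = pickle.load(f)
    with open(prefix + '.modeldef') as f:
        definition = json.load(f)
    return weights, definition

# round-trip a toy "network"
prefix = os.path.join(tempfile.mkdtemp(), 'toy_net')
w = {'conv1/weights': [0.1, -0.2], 'conv1/biases': [0.1]}
d = {'layers': ['conv1', 'pool1', 'prob']}
save_split(prefix, w, d)
w2, d2 = load_split(prefix)
assert w2 == w and d2 == d
```

Keeping the definition in a separate, readable file lets a fresh `Network()` rebuild the graph before the binary weights are restored into it.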

Load the model


In [6]:
from tfs.network import Network
net2 = Network()
net2.load('lenet_epoch_1')

In [7]:
print net2


Name:conv1     	Type:Conv2d(knum=20,ksize=[5, 5],strides=[1, 1],padding=VALID,activation=None)
Name:pool1     	Type:MaxPool(ksize=[2, 2],strides=[2, 2])
Name:conv2     	Type:Conv2d(knum=50,ksize=[5, 5],strides=[1, 1],padding=VALID,activation=relu)
Name:pool2     	Type:MaxPool(ksize=[2, 2],strides=[2, 2])
Name:ip1       	Type:FullyConnect(outdim=500,activation=relu)
Name:ip2       	Type:FullyConnect(outdim=10,activation=None)
Name:prob      	Type:Softmax()

In [8]:
print net2.optimizer


GradientDecentOptimizer
-----param-----
learning_rate=0.001,print_names=['learning_rate']
---------------

In [9]:
print net2.initializer


DefaultInitializer
-----param-----
print_names=[]
-----nodes-----
conv1
    conv1/weights:0     xavier(seed=None,uniform=True,mode=FAN_AVG,factor=1.0)
    conv1/biases:0      constant(val=0.1)
pool1
conv2
    conv2/biases:0      constant(val=0.1)
    conv2/weights:0     xavier(seed=None,uniform=True,mode=FAN_AVG,factor=1.0)
pool2
ip1
    ip1/weights:0       xavier(seed=None,uniform=True,mode=FAN_AVG,factor=1.0)
    ip1/biases:0        constant(val=0.1)
ip2
    ip2/biases:0        constant(val=0.1)
    ip2/weights:0       xavier(seed=None,uniform=True,mode=FAN_AVG,factor=1.0)
prob

In [10]:
print net2.losser


CrossEntropyByLogitLabel (ip2)
-----param-----
----------------

In [11]:
print 'accuracy',net2.score(dataset.test)


accuracy 0.8275
  • fine-tune the loaded model
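
Conceptually, fine-tuning just resumes gradient descent from the saved parameters instead of from a fresh random initialization. A toy sketch of why resuming helps (plain Python, unrelated to the tfs API), minimizing f(w) = (w - 3)^2 with SGD:

```python
def sgd(w, steps, lr=0.1):
    """Minimize f(w) = (w - 3)**2; the gradient is 2 * (w - 3)."""
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

w_trained = sgd(0.0, 20)      # initial training run
# "saving" the model amounts to keeping w_trained; "loading" restores it
w_tuned = sgd(w_trained, 20)  # fine-tuning continues the same descent
assert abs(w_tuned - 3) <= abs(w_trained - 3)
```

This mirrors what the cell below does: the loaded net starts near the accuracy it had when saved, and further `fit` steps improve it from there.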

In [12]:
net2.fit(dataset,batch_size=200,n_epoch=1,max_step=100)


step 10. loss 14.938343, score:0.841300
step 20. loss 14.967062, score:0.850700
step 30. loss 14.971493, score:0.854300
step 40. loss 14.915509, score:0.864300
step 50. loss 14.754118, score:0.864100
step 60. loss 14.821177, score:0.871000
step 70. loss 14.832127, score:0.882600
step 80. loss 15.034270, score:0.875900
step 90. loss 14.842802, score:0.886700
step 100. loss 14.856237, score:0.889400
Out[12]:
<tfs.network.base.Network at 0x1187b7590>

In [13]:
net2.score(dataset.test)


Out[13]:
0.88939999999999997
