In [0]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
The Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying the model for on-device ML applications.
This notebook shows an end-to-end example that uses the Model Maker library to adapt and convert a commonly used image classification model to classify flowers on a mobile device.
To run this example, we first need to install several required packages, including the Model Maker package from the GitHub repo.
In [0]:
!pip install git+https://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]
Import the required packages.
In [0]:
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_maker.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_maker.core.task import image_classifier
from tensorflow_examples.lite.model_maker.core.task.model_spec import mobilenet_v2_spec
from tensorflow_examples.lite.model_maker.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
In [0]:
image_path = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
You could replace image_path with your own image folders. To upload data to Colab, you could use the upload button in the left sidebar, shown in the image below with the red rectangle. Just have a try: upload a zip file and unzip it. The root file path is the current path.
If you prefer not to upload your images to the cloud, you could try to run the library locally following the guide on GitHub.
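If you are working in Colab, a minimal sketch of the upload-and-unzip flow looks like the following. The archive name my_flowers.zip is hypothetical; the extracted folder must follow the one-subfolder-per-class layout described later in this notebook.
In [0]:
import zipfile

from google.colab import files

# Prompt for a file upload in the Colab UI; returns a dict of {filename: bytes}.
uploaded = files.upload()

# Unzip the (hypothetical) archive into the current directory.
with zipfile.ZipFile('my_flowers.zip', 'r') as archive:
  archive.extractall('.')

image_path = 'my_flowers'  # point the loader at the extracted folder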
Step 1. Load input data specific to an on-device ML app. Split it into training data and testing data.
In [0]:
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
Step 2. Customize the TensorFlow model.
In [0]:
model = image_classifier.create(train_data)
Step 3. Evaluate the model.
In [0]:
loss, accuracy = model.evaluate(test_data)
Step 4. Export to TensorFlow Lite model.
Here, we export a TensorFlow Lite model with metadata, which provides a standard for model descriptions. You could download it from the left sidebar, the same place used for uploading, for your own use.
In [0]:
model.export(export_dir='.')
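If you are running in Colab, you could also fetch the exported file programmatically instead of using the sidebar. A minimal sketch, assuming the default export filename model.tflite used later in this notebook:
In [0]:
from google.colab import files

# Trigger a browser download of the exported TensorFlow Lite model.
files.download('model.tflite')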
After these simple 4 steps, we could further use the TensorFlow Lite model file and label file in on-device applications, such as the image classification reference app.
Currently, we support several models, such as EfficientNet-Lite* models, MobileNetV2 and ResNet50, as pre-trained models for image classification. It is also very flexible to add new pre-trained models to this library with just a few lines of code.
The following walks through this end-to-end example step by step to show more detail.
The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.
The dataset has the following directory structure:
flower_photos
|__ daisy
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ dandelion
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ roses
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ sunflowers
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ tulips
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...
In [0]:
image_path = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)
Use the ImageClassifierDataLoader class to load data.
The from_folder() method could load data from the folder. It assumes that image data of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded and PNG-encoded images are supported.
In [0]:
data = ImageClassifierDataLoader.from_folder(image_path)
Split it into training data (80%), validation data (10%, optional) and testing data (10%).
In [0]:
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
Show 25 image examples with labels.
In [0]:
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(data.dataset.take(25)):
  plt.subplot(5, 5, i + 1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image.numpy(), cmap=plt.cm.gray)
  plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
In [0]:
model = image_classifier.create(train_data, validation_data=validation_data)
Have a look at the detailed model structure.
In [0]:
model.summary()
In [0]:
loss, accuracy = model.evaluate(test_data)
We could plot the predicted results for 100 test images. Predicted labels in red are wrong predictions; the others are correct.
In [0]:
# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match.
def get_label_color(val1, val2):
  if val1 == val2:
    return 'black'
  else:
    return 'red'

# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.dataset.take(100)):
  ax = plt.subplot(10, 10, i + 1)
  plt.xticks([])
  plt.yticks([])
  plt.grid(False)
  plt.imshow(image.numpy(), cmap=plt.cm.gray)

  predict_label = predicts[i][0][0]
  color = get_label_color(predict_label,
                          test_data.index_to_label[label.numpy()])
  ax.xaxis.label.set_color(color)
  plt.xlabel('Predicted: %s' % predict_label)
plt.show()
If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage below to explore alternatives, such as changing to a larger model or adjusting re-training parameters.
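For instance, a minimal sketch of retraining with adjusted parameters, using the epochs and dropout_rate hyperparameters covered in Advanced Usage below; treat the exact values as placeholders:
In [0]:
# Retrain with more epochs and an explicit dropout rate; both keyword
# arguments are discussed in the Advanced Usage section below.
model = image_classifier.create(train_data,
                                validation_data=validation_data,
                                epochs=10,
                                dropout_rate=0.2)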
In [0]:
model.export(export_dir='.')
The TensorFlow Lite model file and label file could be used in the image classification reference app.
Taking the Android reference app as an example, we could add flower_classifier.tflite and flower_label.txt to its assets folder. Meanwhile, change the label filename in code and the TensorFlow Lite file name in code. Thus, we could run the retrained float TensorFlow Lite model on the Android app.
Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
In [0]:
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('model.tflite', 'rb') as f:
  model_content = f.read()

# Read label names from label file.
with tf.io.gfile.GFile('labels.txt', 'r') as f:
  label_names = f.read().split('\n')

# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]['index'])

# Run predictions on each test image and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(test_data.dataset):
  # Pre-processing should remain the same as in training. Currently, it just
  # normalizes each pixel value and resizes the image according to the
  # model's specification.
  image, _ = model.preprocess(image, label)
  # Add a batch dimension and convert to float32 to match the model's input
  # data format.
  image = tf.expand_dims(image, 0).numpy()

  # Run inference.
  interpreter.set_tensor(input_index, image)
  interpreter.invoke()

  # Post-processing: remove the batch dimension and find the label with the
  # highest probability.
  predict_label = np.argmax(output()[0])
  # Get the label name from the label index.
  predict_label_name = label_names[predict_label]

  accurate_count += (predict_label == label.numpy())

accuracy = accurate_count * 1.0 / test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
Note that preprocessing for inference should be the same as in training. Currently, preprocessing consists of normalizing each pixel value and resizing the image according to the model's specification. For EfficientNet-Lite0, the input image should be normalized to [0, 1] and resized to [224, 224, 3].
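As an illustration, here is a minimal sketch of the equivalent manual preprocessing for EfficientNet-Lite0; this is not the library's own code, just the [0, 1] normalization and 224x224 resize described above:
In [0]:
# Manually replicate the EfficientNet-Lite0 preprocessing described above:
# scale pixel values to [0, 1] and resize to the model's input size.
def preprocess_for_efficientnet_lite0(image):
  image = tf.cast(image, tf.float32) / 255.0   # normalize to [0, 1]
  image = tf.image.resize(image, [224, 224])   # resize; 3 color channels kept
  return image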
The create function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the tutorial.
The create function contains the following steps:
1. Split the data into training, validation and testing data according to the parameters validation_ratio and test_ratio. The default values of validation_ratio and test_ratio are 0.1 and 0.1.
2. Download an image feature vector as the base model from TensorFlow Hub. The default pre-trained model is EfficientNet-Lite0.
3. Add a classifier head with a dropout layer with dropout_rate between the head layer and the pre-trained model (see the sketch after this list). The default dropout_rate is the default dropout_rate value from make_image_classifier_lib by TensorFlow Hub.
4. Preprocess the raw input data. Currently, preprocessing includes normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. EfficientNet-Lite0 has the input scale [0, 1] and the input image size [224, 224, 3].
5. Feed the data into the classifier model. By default, the training parameters, such as training epochs, batch size, learning rate and momentum, are the default values from make_image_classifier_lib by TensorFlow Hub. Only the classifier head is trained.
6. Evaluate the model.
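As a conceptual sketch of steps 2 and 3, not the library's actual code, the assembled model is roughly a frozen TF Hub feature-vector layer followed by a dropout layer and a dense classification head:
In [0]:
import tensorflow_hub as hub

# Conceptual sketch only: a frozen TF Hub feature vector base, a dropout layer
# with dropout_rate, and a dense softmax head over the classes.
def build_transfer_model(num_classes, hub_uri,
                         input_shape=(224, 224, 3), dropout_rate=0.2):
  return tf.keras.Sequential([
      hub.KerasLayer(hub_uri, trainable=False, input_shape=input_shape),
      tf.keras.layers.Dropout(rate=dropout_rate),
      tf.keras.layers.Dense(num_classes, activation='softmax'),
  ])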
In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters etc.
This library supports EfficientNet-Lite models, MobileNetV2 and ResNet50 by now. EfficientNet-Lite is a family of image classification models that could achieve state-of-the-art accuracy and are suitable for edge devices. The default model is EfficientNet-Lite0.
We could switch the model to MobileNetV2 by just setting the parameter model_spec to mobilenet_v2_spec in the create method.
In [0]:
model = image_classifier.create(train_data, model_spec=mobilenet_v2_spec, validation_data=validation_data)
Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss on the testing data.
In [0]:
loss, accuracy = model.evaluate(test_data)
Moreover, we could also switch to other new models that input an image and output a feature vector in TensorFlow Hub format.
Taking the Inception V3 model as an example, we could define inception_v3_spec, an object of ImageModelSpec that contains the specification of the Inception V3 model.
We need to specify the model name name and the URL of the TensorFlow Hub model uri. Meanwhile, the default value of input_image_shape is [224, 224]. We need to change it to [299, 299] for the Inception V3 model.
In [0]:
inception_v3_spec = ImageModelSpec(
    uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
Then, by setting the parameter model_spec to inception_v3_spec in the create method, we could retrain the Inception V3 model.
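A minimal sketch of that call, reusing the create signature shown earlier in this notebook:
In [0]:
# Retrain using the Inception V3 spec defined above; same call pattern as the
# MobileNetV2 example earlier.
model = image_classifier.create(train_data,
                                model_spec=inception_v3_spec,
                                validation_data=validation_data)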
The remaining steps are exactly the same, and we could get a customized Inception V3 TensorFlow Lite model in the end.
If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export the model in TensorFlow Hub format.
Then define an ImageModelSpec object as in the process above.
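A minimal sketch of what that could look like; the local path below is hypothetical and assumes the exported model follows the TF Hub image feature vector convention:
In [0]:
# Hypothetical path to a locally exported TF Hub-format feature vector model.
custom_spec = ImageModelSpec(uri='/tmp/my_feature_vector_model')
# Match your model's expected input size; [224, 224] is the library default.
custom_spec.input_image_shape = [224, 224]

model = image_classifier.create(train_data,
                                model_spec=custom_spec,
                                validation_data=validation_data)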
We could also change the training hyperparameters like epochs, dropout_rate and batch_size, which could affect the model accuracy. For instance,
- epochs: more epochs could achieve better accuracy until the model converges, but training for too many epochs may lead to overfitting.
- dropout_rate: the dropout rate, used to avoid overfitting.
- batch_size: the number of samples to use in one training step.
- validation_data: the data used for validation during training.
For example, we could train with more epochs.
In [0]:
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
Evaluate the newly retrained model with 10 training epochs.
In [0]:
loss, accuracy = model.evaluate(test_data)