In [0]:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

Create an ML Kit Image Labeling model with TensorFlow Lite Model Maker

The Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying the model for on-device ML applications.

This notebook shows an end-to-end example that uses the Model Maker library to create an image labeling model for the ML Kit custom Image Labeling and Object Detection and Tracking features.

Prerequisites

To run this example, we first need to install several required packages, including the Model Maker package that lives in the GitHub repo.


In [0]:
!pip install git+https://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_maker]

Import the required packages.


In [0]:
import numpy as np

import tensorflow as tf

from tensorflow_examples.lite.model_maker.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_maker.core.task import image_classifier
from tensorflow_examples.lite.model_maker.core.task.model_spec import ImageModelSpec
from tensorflow_examples.lite.model_maker.core.task import configs
from tensorflow_examples.lite.model_maker.core import compat
import matplotlib.pyplot as plt

Make sure to set tf_version to 1 to produce models with uint8 input and output types that are compatible with ML Kit.


In [0]:
compat.setup_tf_behavior(tf_version=1)

Get the data path

Let's get some images to play with in this simple end-to-end example. Hundreds of images are a good start for Model Maker, while more data could achieve better accuracy.


In [0]:
image_path = tf.keras.utils.get_file(
      'flower_photos',
      'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
      untar=True)

You could replace image_path with your own image folder. To upload data to Colab, use the upload button in the left sidebar. Just try uploading a zip file and unzipping it, as sketched below. The root file path is the current path.
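
For example, here is a minimal sketch of unzipping an uploaded archive and pointing image_path at the extracted folder. The names my_images.zip and my_images are hypothetical placeholders for your own upload.

In [0]:
import os
import zipfile

# Hypothetical example: 'my_images.zip' is assumed to be an archive you
# uploaded through the Colab sidebar; replace the names with your own.
with zipfile.ZipFile('my_images.zip', 'r') as archive:
  archive.extractall('.')

# Point image_path at the extracted folder of class subdirectories.
image_path = os.path.join(os.getcwd(), 'my_images')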

Make sure the file structure is correct. For example, the flower dataset contains 3670 images belonging to 5 classes.

The dataset has the following directory structure:

flower_photos
|__ daisy
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ dandelion
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ roses
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ sunflowers
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ tulips
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...

If you prefer not to upload your images to the cloud, you could try running the library locally by following the guide on GitHub.

Run the example

The example consists of just 5 short steps, as shown below, each representing one part of the overall process.

Step 1. Load input data specific to an on-device ML app and split it into training and testing data.


In [0]:
train_data, test_data = ImageClassifierDataLoader.from_folder(image_path).split(0.9)
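
The split(0.9) call returns two loaders: the first with 90% of the images for training and the second with the remaining 10% for testing. If you also want a separate validation set, one option (a sketch, not part of the example's 5 steps) is to split the held-out portion again:

In [0]:
# A sketch: split the held-out 10% in half, giving roughly 5% validation
# data and 5% test data. The variable names are placeholders.
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, rest_data = data.split(0.9)
validation_data, test_data = rest_data.split(0.5)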

Step 2. Customize the TensorFlow model.


In [0]:
model = image_classifier.create(train_data)
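
If you want more control over training, image_classifier.create also accepts training parameters; for example, a hedged sketch of training for more epochs (the value 10 is just an illustration):

In [0]:
# A sketch: train for 10 epochs instead of the library default.
model = image_classifier.create(train_data, epochs=10)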

Step 3. Evaluate the model.


In [0]:
loss, accuracy = model.evaluate(test_data)
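
To inspect the evaluation results, you could simply print the returned values:

In [0]:
print('Test loss:', loss)
print('Test accuracy:', accuracy)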

Step 4. Set up the quantization config for a model with uint8 input and output types.


In [0]:
config = configs.QuantizationConfig.create_full_integer_quantization(
    representative_data=test_data, is_integer_only=True)

Step 5. Export to a TensorFlow Lite model.

Here, we export a TensorFlow Lite model with metadata, which provides a standard for model descriptions. You could download it from the left sidebar, in the same place as the uploading step, for your own use.


In [0]:
model.export(export_dir='.', quantization_config=config)
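
If you prefer to download the exported model programmatically rather than through the sidebar, here is a sketch assuming Model Maker's default output file name model.tflite:

In [0]:
# A sketch: download the exported file from Colab. 'model.tflite' is assumed
# to be the name written by export above; adjust if your file name differs.
from google.colab import files
files.download('model.tflite')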

After these 5 simple steps, we can use the resulting TensorFlow Lite model file in the ML Kit Image Labeling and Object Detection and Tracking features.
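
Before dropping the model into an app, you could sanity-check that the exported model really has uint8 input and output tensors. A minimal sketch using the TensorFlow Lite interpreter, assuming the exported file is named model.tflite:

In [0]:
# A sketch: load the exported model and print its input and output tensor
# types. Both should report uint8 for ML Kit compatibility.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
print('Input type:', interpreter.get_input_details()[0]['dtype'])
print('Output type:', interpreter.get_output_details()[0]['dtype'])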

TensorFlow Lite Model Maker allows changing the model architecture to suit different needs. Here are the instructions for how to change the model architecture: https://www.tensorflow.org/lite/tutorials/model_maker_image_classification#change_the_model
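
For example, here is a hedged sketch of switching architectures with the imported ImageModelSpec, pointing at an Inception V3 feature-vector module on TensorFlow Hub (the URI and input shape follow the linked tutorial; adjust them for the architecture you choose):

In [0]:
# A sketch: use a custom model spec instead of the default architecture.
inception_v3_spec = ImageModelSpec(
    uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
model = image_classifier.create(train_data, model_spec=inception_v3_spec)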