In [0]:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
This notebook uses a set of TensorFlow training scripts to perform transfer learning on a quantization-aware object detection model and then convert it for compatibility with the Edge TPU.
Specifically, this tutorial shows you how to retrain a MobileNet V1 SSD model so that it detects two pets: Abyssinian cats and American Bulldogs (from the Oxford-IIIT Pets Dataset), using TensorFlow r1.12.
Beware that this training can take much longer in Colab than on a desktop computer, because Colab provides limited resources for long-running operations. You'll likely see faster training if you connect this notebook to a local runtime, or if you instead follow the tutorial to run this training in Docker (which includes more documentation about the process).
In [0]:
! pip uninstall tensorflow -y
! pip install tensorflow==1.12
In [0]:
import tensorflow as tf
print(tf.__version__)
In [0]:
! git clone https://github.com/tensorflow/models.git
In [0]:
! cd models && git checkout f788046ca876a8820e05b0b48c1fc2e16b0955bc
In [0]:
! git clone https://github.com/google-coral/tutorials.git
! cp -r tutorials/docker/object_detection/scripts/* models/research/
In [0]:
! apt-get install -y python python-tk
! pip install Cython contextlib2 pillow lxml jupyter matplotlib
In [0]:
# Get protoc 3.0.0, rather than the old version already in the container
! wget https://www.github.com/google/protobuf/releases/download/v3.0.0/protoc-3.0.0-linux-x86_64.zip
! unzip protoc-3.0.0-linux-x86_64.zip -d proto3
! mkdir -p local/bin && mkdir -p local/include
! mv proto3/bin/* local/bin
! mv proto3/include/* local/include
! rm -rf proto3 protoc-3.0.0-linux-x86_64.zip
In [0]:
# Install the COCO API (pycocotools)
! git clone --depth 1 https://github.com/cocodataset/cocoapi.git
! (cd cocoapi/PythonAPI && make -j8)
! cp -r cocoapi/PythonAPI/pycocotools/ models/research/
! rm -rf cocoapi
In [0]:
# Run protoc on the object detection repo (generate .py files from .proto)
% cd models/research/
! ../../local/bin/protoc object_detection/protos/*.proto --python_out=.
In [0]:
import os
# Make the Object Detection API and slim modules importable
os.environ['PYTHONPATH'] = os.environ.get('PYTHONPATH', '') + ":/content/models/research:/content/models/research/slim"
Just to verify everything is correctly set up:
In [0]:
! python object_detection/builders/model_builder_test.py
To train with different images, read how to configure your own training data.
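For reference, the Object Detection API identifies classes through a label map in protobuf text format. As a rough sketch of what such a file looks like (the file name and item entries below are purely illustrative, not something this tutorial's scripts read):
In [0]:
%%writefile example_label_map.pbtxt
item {
  id: 1
  name: 'Abyssinian'
}
item {
  id: 2
  name: 'american_bulldog'
}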
In [0]:
! ./prepare_checkpoint_and_dataset.sh --network_type mobilenet_v1_ssd --train_whole_model false
The following script can take several hours to finish in Colab. (You can shorten the training time by reducing the number of steps, but that also reduces the final accuracy.)
If you didn't already select "Run all," you should run all remaining cells now. That ensures the rest of the notebook completes while you're away, avoiding the chance that the Colab runtime times out and you lose the trained model before you can download it.
In [0]:
%env NUM_TRAINING_STEPS=500
%env NUM_EVAL_STEPS=100
# If you're retraining the whole model, we suggest these values:
# %env NUM_TRAINING_STEPS=50000
# %env NUM_EVAL_STEPS=2000
In [0]:
! ./retrain_detection_model.sh --num_training_steps $NUM_TRAINING_STEPS --num_eval_steps $NUM_EVAL_STEPS
As training progresses, you can see new checkpoint files appear in the models/research/learn_pet/train/ directory.
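For example, once training finishes (or if you interrupt it), you can list the checkpoints with a quick sketch like this (checkpoints follow TensorFlow's default model.ckpt-&lt;step&gt; naming, and the working directory is still models/research/):
In [0]:
import glob, os
# Print checkpoint files in the order they were written
for path in sorted(glob.glob('learn_pet/train/model.ckpt-*'), key=os.path.getmtime):
    print(path)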
In [0]:
! ./convert_checkpoint_to_edgetpu_tflite.sh --checkpoint_num $NUM_TRAINING_STEPS
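Before compiling, you can optionally sanity-check the converted model with the TF 1.12 interpreter API. This is a minimal sketch, assuming the conversion script wrote its output to learn_pet/models/output_tflite_graph.tflite (the same file compiled below); a quantization-aware model should report a uint8 input:
In [0]:
import tensorflow as tf

# Load the converted model and inspect its input tensor
interpreter = tf.contrib.lite.Interpreter(
    model_path='learn_pet/models/output_tflite_graph.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
print('input shape:', input_details['shape'])
print('input dtype:', input_details['dtype'])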
In [0]:
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
In [0]:
%cd learn_pet/models/
! ls
In [0]:
! edgetpu_compiler output_tflite_graph.tflite
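The compiler prints a summary of how many operations were mapped to the Edge TPU. Assuming it follows its usual convention of also writing that summary to a .log file named after the output model, you can review it with:
In [0]:
! cat output_tflite_graph_edgetpu.log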
Download the files:
In [0]:
from google.colab import files
files.download('output_tflite_graph_edgetpu.tflite')
files.download('labels.txt')
If you get a "Failed to fetch" error here, it's probably because the files haven't finished saving, so wait a moment and try again.
Also watch for a browser popup that may ask for permission to download the files.
You can now run the model on your Coral device with acceleration on the Edge TPU.
To get started, try using this code for object detection with the TensorFlow Lite API. Just follow the instructions on that page to set up your device, copy the output_tflite_graph_edgetpu.tflite and labels.txt files to your Coral Dev Board (or other device with a Coral Accelerator), and pass the script a photo to see the detected objects.
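If you just want a rough idea of what that inference code involves, here is a minimal sketch using the tflite_runtime API with the Edge TPU delegate. It's meant to run on the Coral device (not in Colab), and it assumes the tflite_runtime package and the Edge TPU runtime are installed there, and that pets.jpg is a photo you supply:

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Load the compiled model and delegate execution to the Edge TPU
interpreter = tflite.Interpreter(
    model_path='output_tflite_graph_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

# Resize the photo to the model's expected input size (300x300 for this SSD)
input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details['shape']
image = Image.open('pets.jpg').convert('RGB').resize((width, height))
interpreter.set_tensor(input_details['index'],
                       np.expand_dims(np.asarray(image), 0))
interpreter.invoke()

# SSD models output boxes, class IDs, scores, and a detection count
boxes, classes, scores, count = [
    interpreter.get_tensor(d['index']) for d in interpreter.get_output_details()]
print('top score:', scores[0][0], 'class id:', int(classes[0][0]))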
Check out more examples for running inference at coral.ai/examples.
All the scripts used in this notebook come from the google-coral/tutorials repository, copied into models/research/ earlier in this notebook.
More explanation of the steps in this tutorial is available at https://coral.ai/docs/edgetpu/retrain-detection/.