This notebook shows how to use Binary Neural Networks on PYNQ. It demonstrates image recognition with a binarized neural network inspired by VGG-16, featuring 6 convolutional layers, 3 max-pool layers and 3 fully connected layers.
Creating a classifier automatically downloads the correct bitstream onto the device and loads the weights trained on the specified dataset. By default, three sets of weights are available for the CNV network with 1-bit weights and 1-bit activations (W1A1); this example uses the German Road Sign dataset.
In [1]:
import bnn

# List the datasets for which pre-trained CNV-W1A1 weights are available,
# then instantiate a hardware-accelerated classifier for the road-signs set
print(bnn.available_params(bnn.NETWORK_CNVW1A1))
classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1, 'road-signs', bnn.RUNTIME_HW)
In [2]:
print(classifier.classes)
In [3]:
from PIL import Image
import numpy as np
from os import listdir
from os.path import isfile, join
from IPython.display import display
# Load every picture from the road_signs folder and show a 64x64 thumbnail of each one
imgDir = "/home/xilinx/jupyter_notebooks/bnn/pictures/road_signs/"
imgList = [f for f in listdir(imgDir) if isfile(join(imgDir, f))]
images = []
for imgFile in imgList:
    img = Image.open(join(imgDir, imgFile))
    images.append(img)
    img.thumbnail((64, 64), Image.ANTIALIAS)
    display(img)
In [4]:
results = classifier.classify_images(images)
print("Identified classes: {0}".format(results))
for index in results:
    print("Identified class name: {0}".format(classifier.class_name(index)))
In [5]:
sw_class = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1, "road-signs", bnn.RUNTIME_SW)
results = sw_class.classify_images(images)
print("Identified classes: {0}".format(results))
for index in results:
    print("Identified class name: {0}".format(sw_class.class_name(index)))
In [6]:
from PIL import Image
image_file = "/home/xilinx/jupyter_notebooks/bnn/pictures/street_with_stop.JPG"
im = Image.open(image_file)
im
Out[6]:
The source image is cut into overlapping square tiles at two window sizes (64 and 96 pixels), using a stride of one quarter of the window size. The classification is then run on every tile, and each tile in which the BNN identifies a STOP sign is highlighted on the original image.
In [7]:
images = []
bounds = []
for s in [64, 96]:
    stride = s // 4
    x_tiles = im.width // stride
    y_tiles = im.height // stride
    for j in range(y_tiles):
        for i in range(x_tiles):
            bound = (stride * i, stride * j, stride * i + s, stride * j + s)
            if bound[2] <= im.width and bound[3] < im.height:
                c = im.crop(bound)
                images.append(c)
                bounds.append(bound)
print(len(images))
In [8]:
results = classifier.classify_images(images)
stop = results == 14    # class 14 is the STOP sign
indices = stop.nonzero()[0]
from PIL import ImageDraw
im2 = Image.open(image_file)
draw2 = ImageDraw.Draw(im2)
for i in indices:
    draw2.rectangle(bounds[i], outline='red')
im2
Out[8]:
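The value 14 used above is the position of the STOP sign in the class list printed at the start of the notebook. To avoid the magic number, the index can be looked up from classifier.classes instead (a sketch; the exact label string, 'Stop' here, is an assumption and should be checked against the output of print(classifier.classes)):

# Look up the STOP-sign class index by name instead of hard-coding 14.
# 'Stop' is assumed to be the label used in the road-signs class list.
stop_index = classifier.classes.index('Stop')
stop = results == stop_index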
(Optional) The detailed classification scores can be post-processed to keep only the tiles in which the STOP sign is detected with a score above a given threshold.
In [9]:
result = classifier.classify_images_details(images)
result = result.reshape(len(images), 44)   # one row of raw class scores per tile
from PIL import ImageDraw
draw = ImageDraw.Draw(im)
for i in range(len(images)):
    if result[i][14] > 455:                # keep tiles whose STOP-sign score exceeds the threshold
        draw.rectangle(bounds[i], outline='red')
im
Out[9]:
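The threshold of 455 is empirical. One way to choose it is to look at how the STOP-sign scores are distributed over all tiles: the few genuine detections should stand out well above the bulk of the background tiles (a sketch reusing the result array computed in the cell above):

import numpy as np

# Distribution of the raw class-14 (STOP sign) scores across all tiles
stop_scores = result[:, 14]
print("min / median / max: {0} / {1} / {2}".format(stop_scores.min(), np.median(stop_scores), stop_scores.max()))
print("ten highest scores: {0}".format(np.sort(stop_scores)[-10:]))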
In [10]:
from pynq import Xlnk

# Free the contiguous memory buffers allocated by the classifiers
xlnk = Xlnk()
xlnk.xlnk_reset()