BNN on Pynq

This notebook covers how to use Binary Neural Networks on Pynq. It shows an example of image recognition with a binarized neural network inspired at VGG-16, featuring 6 convolutional layers, 3 max pool layers and 3 fully connected layers

1. Instantiate a Classifier

Creating a classifier will automatically download the correct bitstream onto the device and load the weights trained on the specified dataset. By default there are three sets of weights available for the BNN version of the CNV network using 1 bit weights and 1 activation (W1A1) - this example uses the German Road Sign dataset.


In [1]:
import bnn
print(bnn.available_params(bnn.NETWORK_CNVW1A1))

classifier = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1, 'road-signs', bnn.RUNTIME_HW)


['streetview', 'road-signs', 'cifar10']

2. List the available classes

The selected dataset can classify images in 42 classes, the names of which are accessible through the classifier.


In [2]:
print(classifier.classes)


['20 Km/h', '30 Km/h', '50 Km/h', '60 Km/h', '70 Km/h', '80 Km/h', 'End 80 Km/h', '100 Km/h', '120 Km/h', 'No overtaking', 'No overtaking for large trucks', 'Priority crossroad', 'Priority road', 'Give way', 'Stop', 'No vehicles', 'Prohibited for vehicles with a permitted gross weight over 3.5t including their trailers, and for tractors except passenger cars and buses', 'No entry for vehicular traffic', 'Danger Ahead', 'Bend to left', 'Bend to right', 'Double bend (first to left)', 'Uneven road', 'Road slippery when wet or dirty', 'Road narrows (right)', 'Road works', 'Traffic signals', 'Pedestrians in road ahead', 'Children crossing ahead', 'Bicycles prohibited', 'Risk of snow or ice', 'Wild animals', 'End of all speed and overtaking restrictions', 'Turn right ahead', 'Turn left ahead', 'Ahead only', 'Ahead or right only', 'Ahead or left only', 'Pass by on right', 'Pass by on left', 'Roundabout', 'End of no-overtaking zone', 'End of no-overtaking zone for vehicles with a permitted gross weight over 3.5t including their trailers, and for tractors except passenger cars and buses', 'Not a roadsign']

3. Open images to be classified

The images that we want to classify are loaded and shown to the user


In [3]:
from PIL import Image
import numpy as np
from os import listdir
from os.path import isfile, join
from IPython.display import display

imgList = [f for f in listdir("/home/xilinx/jupyter_notebooks/bnn/pictures/road_signs/") if isfile(join("/home/xilinx/jupyter_notebooks/bnn/pictures/road_signs/", f))]

images = []
   
for imgFile in imgList:
	img = Image.open("/home/xilinx/jupyter_notebooks/bnn/pictures/road_signs/" + imgFile)
	images.append(img)    
	img.thumbnail((64, 64), Image.ANTIALIAS)
	display(img)


4. Launching BNN in hardware

The images are passed in the PL and the inference is performed. The images will be automatically formatted to the required format that is processed by CNV network (Cifar-10 format).


In [4]:
results = classifier.classify_images(images)
print("Identified classes: {0}".format(results))
for index in results:
    print("Identified class name: {0}".format((classifier.class_name(index))))


Inference took 915.00 microseconds, 305.00 usec per image
Classification rate: 3278.69 images per second
Identified classes: [41 27 14]
Identified class name: End of no-overtaking zone
Identified class name: Pedestrians in road ahead
Identified class name: Stop

5. Launching BNN in software

The inference on the same image is performed in sofware on the ARM core by passing the RUNTIME_SW attribute to the Image-Classifier


In [5]:
sw_class = bnn.CnvClassifier(bnn.NETWORK_CNVW1A1,"road-signs", bnn.RUNTIME_SW)

results = sw_class.classify_images(images)
print("Identified classes: {0}".format(results))
for index in results:
    print("Identified class name: {0}".format((classifier.class_name(index))))


Inference took 1252522.97 microseconds, 417507.66 usec per image
Classification rate: 2.40 images per second
Identified classes: [41 27 14]
Identified class name: End of no-overtaking zone
Identified class name: Pedestrians in road ahead
Identified class name: Stop

6. Locate objects within a scene

This example is going to create an array of images from a single input image, tiling the image to try and locate an object. This image shows a road intersection and we're aiming at finding the stop sign.


In [6]:
from PIL import Image
image_file = "/home/xilinx/jupyter_notebooks/bnn/pictures/street_with_stop.JPG"
im = Image.open(image_file)
im


Out[6]:

Here we launch the classification on all the tiles from the source image, and all image in which the BNN identified a STOP signal is shown


In [7]:
images = []
bounds = []
for s in [64,96]:
    stride = s // 4
    x_tiles = im.width // stride
    y_tiles = im.height // stride
    
    for j in range(y_tiles):
        for i in range(x_tiles):
            bound = (stride * i, stride * j, stride * i + s, stride * j + s)
            if bound[2] <= im.width and bound[3] < im.height:
                c = im.crop(bound)
                images.append(c)
                bounds.append(bound)

print(len(images))


1330

In [8]:
results = classifier.classify_images(images)
stop = results == 14
indicies = []
indicies = stop.nonzero()[0]
from PIL import ImageDraw
im2 = Image.open(image_file)
draw2 = ImageDraw.Draw(im2)
for i in indicies:
    draw2.rectangle(bounds[i], outline='red')

im2


Inference took 145902.00 microseconds, 109.70 usec per image
Classification rate: 9115.71 images per second
Out[8]:

(Optional) the classification can be post-analyzed in order to pick only tiles in which the STOP signal is identified with a score higher than a certain threshold


In [9]:
result = classifier.classify_images_details(images)
result=result.reshape(len(images),44)
from PIL import ImageDraw

draw = ImageDraw.Draw(im)
i=0
for image in images:
    if result[i][14] > 455:
        draw.rectangle(bounds[i], outline='red')
    i=i+1    
    
im


Inference took 145901.00 microseconds, 109.70 usec per image
Classification rate: 9115.77 images per second
Out[9]:

7. Reseting the device


In [10]:
from pynq import Xlnk

xlnk = Xlnk();
xlnk.xlnk_reset()