First make sure you have set up your Google account with a Cloud Vision project. Then you need to enter billing details (to verify you are human) and set up credentials. This will download a .json file.
Before you work in this notebook, you have to let Python know about your application default credentials. Put the .json file in this folder (but do not commit it into git!), then run:
export GOOGLE_APPLICATION_CREDENTIALS=<path to your json file>
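Alternatively, you can point the client libraries at the key from inside Python before building the service. Here is a minimal sketch; the filename is a hypothetical placeholder, not something this notebook creates:
import os

# Point the Google client libraries at your downloaded service-account key.
# 'my-vision-credentials.json' is a placeholder; substitute your own file.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'my-vision-credentials.json'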
This code is based on Google's face detection tutorial.
google-api-python-client==1.5.0
Pillow==3.1.1
In [5]:
from googleapiclient import discovery
import httplib2
from oauth2client.client import GoogleCredentials
import base64
from PIL import Image
from PIL import ImageDraw
In [6]:
# Set up a connection to the Google service
DISCOVERY_URL='https://{api}.googleapis.com/$discovery/rest?version={apiVersion}'
credentials = GoogleCredentials.get_application_default()
service = discovery.build('vision', 'v1', credentials=credentials, discoveryServiceUrl=DISCOVERY_URL)
In [7]:
# load the input image as raw bytes
input_filename = "dalai-lama.jpg"
with open(input_filename, 'rb') as image_file:
    image_content = image_file.read()
In [8]:
# fire off a face detection request to Google
batch_request = [{
    'image': {
        # base64-encode the image bytes; decode so the payload is a plain
        # string and stays JSON-serializable under Python 3 as well
        'content': base64.b64encode(image_content).decode('utf-8')
    },
    'features': [{
        'type': 'FACE_DETECTION',
        'maxResults': 4,
    }]
}]
request = service.images().annotate(body={
    'requests': batch_request,
})
response = request.execute()

# count the faces found (one response comes back per input image,
# so count the faceAnnotations rather than the responses themselves)
num_faces = sum(len(r.get('faceAnnotations', [])) for r in response['responses'])
print('Found %s face%s' % (num_faces, '' if num_faces == 1 else 's'))
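The PIL imports above are there so you can draw the results back onto the photo. Here is a minimal sketch following the drawing step in Google's tutorial; the output filename is a placeholder:
# draw a box around each detected face and save a copy of the image
im = Image.open(input_filename)
draw = ImageDraw.Draw(im)
for result in response['responses']:
    for annotation in result.get('faceAnnotations', []):
        # the API omits x or y from a vertex when the value is 0
        box = [(v.get('x', 0.0), v.get('y', 0.0))
               for v in annotation['fdBoundingPoly']['vertices']]
        draw.line(box + [box[0]], width=5, fill='#00ff00')
im.save('detected-faces.jpg')  # placeholder output filename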
In [9]:
# print(response)  # uncomment to inspect the full raw response
# print out the emotion likelihoods for each face
for result in response['responses']:
    for annotation in result['faceAnnotations']:
        for emotion in ['joy', 'sorrow', 'surprise', 'anger']:
            print('%s: %s' % (emotion, annotation[emotion + 'Likelihood']))
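The likelihoods come back as enum strings rather than numbers: VERY_UNLIKELY, UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY, or UNKNOWN.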