The web has long since evolved from user consumption to device consumption. In the early days of the web, when you wanted to check the weather, you opened your browser and visited a website. Nowadays your smart watch or smartphone retrieves the weather for you and displays it on the device. Your device can't predict the weather; it's simply consuming a weather-based service.
The key to making device consumption work is the API (Application Programming Interface). Products we use every day, like smartphones, Amazon's Alexa, and gaming consoles, all rely on APIs. They seem "smart" and "powerful," but in actuality they're only interfacing with smart and powerful services in the cloud.
API consumption is the new reality of programming; that is why we cover it in this course. Once you understand how to consume APIs, you can write a program to do almost anything and harness the power of the internet to make your own programs look "smart" and "powerful."
This lab will be a walk-through of how to use a Web API. Specifically, we will use the Microsoft Azure Text Analytics API to add features like sentiment analysis and named entity recognition to our programs.
In [ ]:
# Run this to make sure you have the pre-requisites!
!pip install -q requests
In [ ]:
# start by importing the modules we will need
import requests
import json
First you will need to sign up for Microsoft Azure for Students using your netid@syr.edu account. This is free for all Syracuse University students. Azure is a cloud provider from Microsoft.
From inside the Azure portal, create a Text Analytics resource with the following settings:
Name: ist256-text-analytics
Subscription: Azure for Students
Region: (US) West US
Pricing tier: F0 (Important: Select the free tier!)
Resource group: ist256-yournetid (You might have to create it first; replace yournetid with your actual netid!)
Once the resource deploys, copy its key and endpoint from the portal.
In [ ]:
# record these values in code, too
key = "key-here"                  # paste your key from the Azure portal
endpoint = "endpoint-url-here"    # paste your endpoint URL from the Azure portal
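Before moving on, you may want to sanity-check your credentials. The sketch below is optional and assumes the language detection endpoint (languages) from the same preview version of the API; a 200 status code means your key and endpoint are working.
In [ ]:
# Optional sanity check of your key and endpoint (assumes the
# language-detection endpoint exists in this API version)
test_url = f'{endpoint}text/analytics/v3.0-preview.1/languages'
test_header = { 'Ocp-Apim-Subscription-Key' : key}
test_documents = {"documents": [ {"id": "1", "text": "Hello, world!"} ]}
test_response = requests.post(test_url, headers=test_header, json=test_documents)
print(test_response.status_code)  # 200 means success; 401 usually means a bad key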
The documentation for the API can be found here: https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-0-Preview-1
Let's give it a try by using named entity recognition. This attempts to extract meaning from text and is quite useful in applications which require natural language understanding.
For all requests, you provide a list of documents you wish the service to act on. In this case we will extract entities from the following phrases:
In [ ]:
url = f'{endpoint}text/analytics/v3.0-preview.1/entities/recognition/general'
header = { 'Ocp-Apim-Subscription-Key' : key}
documents = {"documents": [
    {"id": "1", "text": "I would not pay $5 to see that Star Wars movie next week." },
    {"id": "2", "text": "Syracuse and Rochester are nicer cities than Buffalo." }
]}
response = requests.post(url, headers=header, json=documents)
entities = response.json()
entities
Notice the service does a nice job recognizing $5 as a quantity and next week as a date range.
It also figures out that Syracuse, Rochester, and Buffalo are all locations.
For each recognized entity, you are provided with a score (a confidence score between 0 and 1), a type, and a sub-type as appropriate.
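You can walk that response structure in code. The sketch below assumes the field names used elsewhere in this lab ('text' and 'type') plus a 'score' field for the confidence; exact field names can vary between API versions, which is why it reads the score with a defensive .get().
In [ ]:
# Loop over each document in the response and print its recognized
# entities with their type and confidence score. The 'score' field
# name is an assumption for this preview API version.
for document in entities['documents']:
    print(f"Document {document['id']}:")
    for entity in document['entities']:
        print(f"  {entity['text']} -> type: {entity['type']}, score: {entity.get('score')}")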
Rewrite the example above to perform named entity extraction on the following text:
Four out of five New York City coders prefer Google to Microsoft.
In [1]:
# TODO Write code here
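Try it on your own first. If you get stuck, one possible approach (not the only one) is sketched below; it simply rebuilds the documents structure with the new text and repeats the same request.
In [ ]:
# One possible solution sketch -- attempt the exercise yourself first!
url = f'{endpoint}text/analytics/v3.0-preview.1/entities/recognition/general'
header = { 'Ocp-Apim-Subscription-Key' : key}
documents = {"documents": [
    {"id": "1", "text": "Four out of five New York City coders prefer Google to Microsoft." }
]}
response = requests.post(url, headers=header, json=documents)
entities = response.json()
entities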
In [ ]:
# Another use of named entity recognition: extracting every email address from some text
text = "As of this year, my primary email address is mafudge@syr.edu but I also use mafudge@gmail.com and snookybear4182@aol.com from time to time."
url = f'{endpoint}text/analytics/v3.0-preview.1/entities/recognition/general'
header = { 'Ocp-Apim-Subscription-Key' : key}
documents = {"documents": [
    {"id": "1", "text": text }
]}
response = requests.post(url, headers=header, json=documents)
entities = response.json()
# keep only the entities the service typed as Email
for entity in entities['documents'][0]['entities']:
    if entity['type'] == 'Email':
        print(entity['text'])
Text Analytics technologies such as named entity recognition, key phrase extraction, and sentiment analysis are best used when combined with another source of data. For example, consider mining customer reviews of a restaurant.
Write a program to extract key phrases from the three reviews provided. Make one API call to the url endpoint that has been provided for you. It's up to you to print out the key phrases!
In [ ]:
review1 = "I don't think I will ever order the eggs again. Not very good."
review2 = "Went there last Wednesday. It was crowded and the pancakes and eggs were spot on! I enjoyed my meal."
review3 = "Not sure who is running the place but the eggs benedict were not that great. On the other hand I was happy with my toast."
url = f'{endpoint}text/analytics/v3.0-preview.1/keyphrases'
header = { 'Ocp-Apim-Subscription-Key' : key}
documents = {"documents": [
    {"id": "1", "text": review1 },
    {"id": "2", "text": review2 },
    {"id": "3", "text": review3 }
]}
# TODO Write your code here to call the API, deserialize the JSON, and display the output
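Again, write your own version first. The sketch below is one way to finish; it assumes the response lists each document's phrases under a 'keyPhrases' key, as the documentation for this API version describes.
In [ ]:
# One possible solution sketch -- the 'keyPhrases' field name is taken
# from the v3.0-preview documentation for this endpoint.
response = requests.post(url, headers=header, json=documents)
keyphrases = response.json()
for document in keyphrases['documents']:
    print(f"Review {document['id']}: {document['keyPhrases']}")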
From our key phrase analysis, it looks like customers are talking about eggs, but are they speaking positively or negatively about their egg experiences? This is why sentiment analysis accompanies key phrase analysis: key phrase extraction identifies what customers are talking about, and sentiment analysis provides the context around it.
Perform sentiment analysis over the reviews to determine who likes the eggs and who does not.
In [ ]:
review1 = "I don't think I will ever order the eggs again. Not very good."
review2 = "Went there last Wednesday. It was crowded and the pancakes and eggs were spot on! I enjoyed my meal."
review3 = "Not sure who is running the place but the eggs benedict were not that great. On the other hand I was happy with my toast."
url = f'{endpoint}text/analytics/v3.0-preview.1/sentiment'
header = { 'Ocp-Apim-Subscription-Key' : key}
# TODO: Write code here to build the documents structure then perform the sentiment analysis via the api
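If you need a starting point, the sketch below builds the documents structure the same way as the key phrase exercise and assumes each document in the response carries an overall 'sentiment' label ('positive', 'negative', etc.), per the documentation for this API version.
In [ ]:
# One possible solution sketch -- the 'sentiment' field name is taken
# from the v3.0-preview documentation for this endpoint.
documents = {"documents": [
    {"id": "1", "text": review1 },
    {"id": "2", "text": review2 },
    {"id": "3", "text": review3 }
]}
response = requests.post(url, headers=header, json=documents)
sentiments = response.json()
for document in sentiments['documents']:
    print(f"Review {document['id']} sentiment: {document['sentiment']}")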
Please answer the following questions. This should be a personal narrative, in your own voice. Answer the questions by double-clicking on the question and placing your answer next to the Answer: prompt.
Answer:
Answer:
Answer:
1 ==> I can do this on my own and explain how to do it.
2 ==> I can do this on my own without any help.
3 ==> I can do this with help or guidance from others. If you choose this level please list those who helped you.
4 ==> I don't understand this at all yet and need extra help. If you choose this please try to articulate that which you do not understand.
Answer:
In [ ]:
# SAVE YOUR WORK FIRST! CTRL+S
# RUN THIS CODE CELL TO TURN IN YOUR WORK!
from ist256.submission import Submission
Submission().submit()