In [1]:
from watsongraph.conceptmodel import ConceptModel
import requests
import random
import time
In [3]:
py = ConceptModel(['Python'])
py.concepts()
Out[3]:
Now I am going to explode the model and see what it does...
In [4]:
py.explode()
len(py.concepts())
Out[4]:
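explode() takes every concept currently in the model and pulls in the concepts Watson considers related to each of them. The one-step expansion can be sketched in plain Python with a stubbed relatedness lookup (related() below is a hypothetical stand-in for the Concept Insights query watsongraph actually makes; the stub data is made up):

```python
def related(concept):
    # Hypothetical stand-in for the Concept Insights relatedness query.
    stub = {
        'Python (programming language)': ['Guido van Rossum', 'Django (web framework)'],
        'Guido van Rossum': ['Python (programming language)'],
    }
    return stub.get(concept, [])

def explode(concepts):
    # Expand every node in the model by its related concepts, deduplicating.
    expanded = set(concepts)
    for concept in concepts:
        expanded.update(related(concept))
    return expanded

model = {'Python (programming language)'}
model = explode(model)
print(len(model))  # the model has grown past its single seed concept
```

The real method mutates the underlying graph in place rather than returning a new set, but the growth pattern is the same.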
The next operation examines the edges of the concept model.
In [5]:
py.edges()
Out[5]:
The Concept Insights API returns its results in order of their relevance to the concept at hand, but between the size of Wikipedia and the depth of IBM Watson's own cognitive understanding, a query can return unmanageably many articles, sometimes thousands. To keep the information firehose at a manageable level, the Concept Insights service provides two parameters, which are passed through by the watsongraph graph-expansion methods:

limit: The maximum number of concepts to be returned. Can be any int. Throttled to 50 by default.

level: The popularity threshold of the articles that will be returned, on a 0 (highest) to 5 (lowest) scale. Throttled to 0 by default.

The basic explode() command is a level-0 query. What happens when we play with the levels a bit?
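The interaction of the two knobs can be illustrated with hypothetical data (the triples below are made up; only the throttling semantics are the point): results are first restricted to articles popular enough to pass the level threshold, then truncated to the limit, preserving relevance order.

```python
# Hypothetical (concept, popularity_level, relevance) triples.
results = [
    ('Object-oriented programming', 0, 0.95),
    ('Guido van Rossum',            1, 0.91),
    ('Duck typing',                 2, 0.88),
    ('CPython',                     1, 0.85),
    ('Zen of Python',               3, 0.80),
]

def throttle(results, limit=50, level=0):
    # Keep only articles at or above the popularity threshold (a smaller
    # level number means a more popular article), then cut to the limit,
    # preserving the API's relevance ordering.
    kept = [c for (c, lvl, rel) in results if lvl <= level]
    return kept[:limit]

print(throttle(results))                    # strict level-0 query
print(throttle(results, limit=2, level=2))  # looser threshold, capped at two
```

Raising level admits less-popular articles, which is why the level-1 query below returns far more concepts than the level-0 explode() did.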
In [7]:
database = ConceptModel(['Database'])
database.explode(limit=2000, level=1)
len(database.concepts())
Out[7]:
In [8]:
database.edges()[:20]
Out[8]:
In [11]:
py.edges()[:20]
Out[11]:
In [12]:
py.augment('Standard Library')
len(py.concepts())
Out[12]:
In [13]:
py.neighborhood('Standard Library')
Out[13]:
Let's start over with Python, shall we? Yes we shall...
In [14]:
py.abridge('Standard Library')
len(py.concepts())
Out[14]:
In [15]:
py.abridge('Python')
len(py.concepts())
Out[15]:
In [16]:
py.abridge('Database')
len(py.concepts())
Out[16]:
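augment() and abridge() act as rough inverses: augment merges a concept and its neighborhood into the model, while abridge prunes away the neighborhood that concept brought with it. Assuming the model behaves like an adjacency map (watsongraph actually stores a networkx graph, and this sketch only approximates the library's pruning rules), the abridge step looks roughly like this:

```python
# A rough sketch of abridge() on a plain adjacency map: drop the given
# concept's neighbors from the model, then scrub dangling references.
graph = {
    'Python (programming language)': {'Guido van Rossum', 'CPython'},
    'Guido van Rossum': {'Python (programming language)'},
    'CPython': {'Python (programming language)'},
    'Database': set(),
}

def abridge(graph, concept):
    # Remove the concept's neighbors, then strip edges that pointed at them.
    neighbors = graph.get(concept, set())
    for n in neighbors:
        graph.pop(n, None)
    for edges in graph.values():
        edges -= neighbors
    return graph

abridge(graph, 'Python (programming language)')
print(sorted(graph))  # only the seed and unrelated concepts remain
```

This is why abridging 'Database' above was a no-op on py: the model contained no neighborhood contributed by that concept, so there was nothing to prune.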
In [ ]:
py.remove('Python')  # remove() takes a single concept to delete from the model