spaCy Tutorial

(C) 2019-2020 by Damir Cavar

Version: 1.4, February 2020

Download: This and various other Jupyter notebooks are available from my GitHub repo.

This is a tutorial related to the L665 course on Machine Learning for NLP focusing on Deep Learning, Spring 2018 at Indiana University. The following tutorial assumes that you are using a newer distribution of Python 3 and spaCy 2.2 or newer.

Introduction to spaCy

Follow the instructions on the spaCy homepage about the installation of the module and language models. Your local spaCy module is correctly installed if the following command is successful:


In [1]:
import spacy
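
If the import fails, spaCy still needs to be installed. For a typical setup, the following two commands install the module and the small English model used below (see the spaCy homepage for platform-specific variants):

pip install spacy
python -m spacy download en_core_web_sm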

We can load the English NLP pipeline in the following way:


In [3]:
nlp = spacy.load("en_core_web_sm")

Tokenization


In [4]:
doc = nlp(u'Human ambition is the key to staying ahead of automation.')
for token in doc:
    print(token.text)


Human
ambition
is
the
key
to
staying
ahead
of
automation
.

Part-of-Speech Tagging

We can tokenize the input and part-of-speech tag the individual tokens using the following code:


In [8]:
doc = nlp(u'John bought a car and Mary a motorcycle.')

for token in doc:
    print("\t".join( (token.text, str(token.idx), token.lemma_, token.pos_, token.tag_, token.dep_,
          token.shape_, str(token.is_alpha), str(token.is_stop) )))


John	0	John	PROPN	NNP	nsubj	Xxxx	True	False
bought	5	buy	VERB	VBD	ROOT	xxxx	True	False
a	12	a	DET	DT	det	x	True	True
car	14	car	NOUN	NN	dobj	xxx	True	False
and	18	and	CCONJ	CC	cc	xxx	True	True
Mary	22	Mary	PROPN	NNP	nsubj	Xxxx	True	False
a	27	a	DET	DT	det	x	True	True
motorcycle	29	motorcycle	NOUN	NN	conj	xxxx	True	False
.	39	.	PUNCT	.	punct	.	False	False

The above output contains, for every token in a line, the token itself, its character offset in the text, the lemma, the coarse Part-of-Speech tag, the fine-grained tag, the dependency label, the orthographic shape (upper- and lower-case characters as X or x respectively), a boolean indicating whether the token is alphabetic, and a boolean indicating whether it is a stopword.
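
If a tag or label abbreviation is unclear, spacy.explain returns a short description of it:

print(spacy.explain("NNP"))    # e.g. "noun, proper singular"
print(spacy.explain("nsubj"))  # e.g. "nominal subject"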

Dependency Parse

Using the same approach as above for PoS-tags, we can print the Dependency Parse relations:


In [9]:
for token in doc:
    print(token.text, token.dep_, token.head.text, token.head.pos_,
          [child for child in token.children])


John nsubj bought VERB []
bought ROOT bought VERB [John, car, .]
a det car NOUN []
car dobj bought VERB [a, and, motorcycle]
and cc car NOUN []
Mary nsubj motorcycle NOUN []
a det motorcycle NOUN []
motorcycle conj car NOUN [Mary, a]
. punct bought VERB []

As specified in the code, each line represents one token: the token itself in the first column, the dependency relation linking it to its head in the second, the head token in the third, the head's coarse Part-of-Speech tag in the fourth, and the token's syntactic children in the final column.

Named Entity Recognition

Similarly to PoS-tags and Dependency Parse Relations, we can print out Named Entity labels:


In [10]:
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)


John 0 4 PERSON
Mary 22 26 PERSON

We can extend the input with some more entities:


In [14]:
doc = nlp(u'Ali Hassan Kuban said that Apple Inc. will buy Google in May 2018.')

The corresponding NE-labels are:


In [15]:
for ent in doc.ents:
    print(ent.text, ent.start_char, ent.end_char, ent.label_)


Ali Hassan Kuban 0 16 PERSON
Apple Inc. 27 37 ORG
Google 47 53 ORG
May 2018 57 65 DATE

Pattern Matching in spaCy


In [27]:
from spacy.matcher import Matcher

matcher = Matcher(nlp.vocab)
pattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True}, {'LOWER': 'world'}]
matcher.add('HelloWorld', None, pattern)

doc = nlp(u'Hello, world! Hello... world!')
matches = matcher(doc)
for match_id, start, end in matches:
    string_id = nlp.vocab.strings[match_id]  # Get string representation
    span = doc[start:end]  # The matched span
    print(match_id, string_id, start, end, span.text)
print("-" * 50)
doc = nlp(u'Hello, world! Hello world!')
matches = matcher(doc)
for match_id, start, end in matches:
    string_id = nlp.vocab.strings[match_id]  # Get string representation
    span = doc[start:end]  # The matched span
    print(match_id, string_id, start, end, span.text)


15578876784678163569 HelloWorld 0 3 Hello, world
15578876784678163569 HelloWorld 4 7 Hello... world
--------------------------------------------------
15578876784678163569 HelloWorld 0 3 Hello, world
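
As a variation, the punctuation token can be made optional with the 'OP': '?' operator, so a single pattern covers both variants above:

matcher = Matcher(nlp.vocab)
# 'OP': '?' makes the punctuation token optional, so both "Hello, world"
# and "Hello world" are matched by one pattern.
pattern = [{'LOWER': 'hello'}, {'IS_PUNCT': True, 'OP': '?'}, {'LOWER': 'world'}]
matcher.add('HelloWorld', None, pattern)

doc = nlp(u'Hello, world! Hello world!')
for match_id, start, end in matcher(doc):
    print(nlp.vocab.strings[match_id], doc[start:end].text)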

What spaCy is Missing

From a linguistic standpoint, when looking at the analytical output of the NLP pipeline in spaCy, some important components are missing:

  • Clause boundary detection
  • Constituent structure trees (scope relations over constituents and phrases)
  • Anaphora resolution
  • Coreference analysis
  • Temporal reference resolution
  • ...

Clause Boundary Detection

Complex sentences consist of clauses. For precise processing of semantic properties of natural language utterances we need to segment the sentences into clauses. The following sentence:

The man said that the woman claimed that the child broke the toy.

can be broken into the following clauses:

  • Matrix clause: [ the man said ]
  • Embedded clause: [ that the woman claimed ]
  • Embedded clause: [ that the child broke the toy ]

These clauses do not form an ordered list or flat sequence; they are in fact hierarchically organized. The matrix clause verb selects as its complement an embedded finite clause with the complementizer that. The embedded predicate claimed selects the same kind of clausal complement. We express this hierarchical relation in the form of embedding in tree representations:

[ the man said [ that the woman claimed [ that the child broke the toy ] ] ]

Or, using a graphical representation, in the form of a tree:

The hierarchical relation of sub-clauses is relevant when it comes to semantics. The clause John sold his car can be interpreted as an assertion that describes an event with John as the agent and the car as the object of a selling event in the past. If the clause is embedded under a matrix clause that contains a sentential negation, the proposition is no longer asserted to be true: [ Mary did not say that [ John sold his car ] ]

With additional effort it is possible to translate the Dependency Trees into clauses and reconstruct the clause hierarchy in a suitable form or data structure, as sketched below. SpaCy does not offer a direct data output of such relations.
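
Here is a minimal sketch of such a reconstruction, using only the dependency labels spaCy already provides. Treating the ROOT and ccomp verbs as clause heads is an assumption made for this example; other clause types (advcl, xcomp, relative clauses) would need additional labels.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The man said that the woman claimed that the child broke the toy.")

# Treat the sentence ROOT and clausal complements (ccomp) as clause heads.
clause_heads = [t for t in doc if t.dep_ in ("ROOT", "ccomp")]

def nearest_clause_head(token):
    # The closest clause head dominating the token (or the token itself).
    if token in clause_heads:
        return token
    for ancestor in token.ancestors:
        if ancestor in clause_heads:
            return ancestor
    return None

# Group every token under its nearest clause head.
clauses = {head.i: [] for head in clause_heads}
for token in doc:
    head = nearest_clause_head(token)
    if head is not None:
        clauses[head.i].append(token.text)

for head_i, words in clauses.items():
    print(doc[head_i].text, "->", " ".join(words))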

One problem still remains, and this is clausal discontinuities. None of the common NLP pipelines, and spaCy in particular, can deal with such discontinuities in any reasonable way. Discontinuities can be observed when syntactic structures are split across the clause or sentence, or when elements occur in a position different from their canonical one, as in the following example:

Which car did John claim that Mary took?

The embedded clause consists of the sequence [ Mary took which car ]. One part of the sequence appears dislocated and precedes the matrix clause in the above example. Simple Dependency Parsers cannot generate any reasonable output that makes it easy to identify and reconstruct the relations of clausal elements in these structures.

Constituent Structure Trees

Dependency Parse trees are a simplification of the relations between elements in a clause. They ignore structural and hierarchical relations in a sentence or clause, as shown in the examples above. Instead, Dependency Parse trees show simple functional relations, in the sense of sentential functions like subject or object of a verb.

SpaCy does not output any kind of constituent structure or more detailed relational properties of phrases and larger structural units in a sentence or clause.

Since many semantic properties are defined or determined in terms of structural relations and hierarchies, that is, scope relations, they are more complicated to reconstruct or map from the Dependency Parse trees.
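
The closest spaCy itself gets to phrase structure are flat base noun phrases (noun_chunks) and token subtrees, which approximate some phrase spans but carry no hierarchical scope information (using the nlp pipeline loaded above):

doc = nlp("The man said that the woman claimed that the child broke the toy.")
# Flat base noun phrases; no embedding or scope relations are provided.
for chunk in doc.noun_chunks:
    print(chunk.text, "|", chunk.root.dep_, "of", chunk.root.head.text)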

Anaphora Resolution

SpaCy does not offer any anaphora resolution annotation. That is, the referent of a pronoun, as in the following examples, is not annotated in the resulting linguistic data structure:

  • John saw him.
  • John said that he saw the house.
  • Tim sold his house. He moved to Paris.
  • John saw himself in the mirror.

Knowing the restrictions on pronominal binding (in English, for example), we can partially generate the potential or most likely anaphora-antecedent relations. This, however, is not part of the spaCy output.
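
A naive illustration of candidate generation, pairing each pronoun with the nouns and proper nouns that precede it (this is only a sketch, not a binding-theoretic resolver, and it reuses the nlp pipeline loaded above):

doc = nlp("Tim sold his house. He moved to Paris.")
for token in doc:
    if token.tag_ in ("PRP", "PRP$"):  # personal and possessive pronouns
        # every preceding noun or proper noun is treated as a potential antecedent
        candidates = [t.text for t in doc if t.i < token.i and t.pos_ in ("PROPN", "NOUN")]
        print(token.text, "->", candidates)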

One problem, however, is that spaCy does not provide parse trees of the constituent structure and clausal hierarchies, which are crucial for the correct analysis of pronominal anaphoric relations.

Coreference Analysis

Some NLP pipelines are capable of providing coreference analyses for constituents in clauses. For example, the following sentences should be analyzed as talking about the same individual:

The CEO of Apple, Tim Cook, decided to apply for a job at Google. Cook said that he is not satisfied with the quality of the iPhones anymore. He prefers the Pixel 2.

The constituents [ the CEO of Apple, Tim Cook ] in the first sentence, [ Cook ] in the second sentence, and [ he ] in the third, should all be tagged as referencing the same entity, that is the one mentioned in the first sentence. SpaCy does not provide such a level of analysis or annotation.

Temporal Reference

For various levels of analysis it is essential to identify the time references in a sentence or utterance, for example the time the utterance is made or the time the described event happened.

Certain tenses are expressed through periphrastic constructions involving auxiliaries and main verbs. SpaCy does not identify these constructions or the tenses they express; the pieces have to be assembled from the dependency and tag annotation.
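
A rough sketch of how the relevant pieces could be collected from the existing annotation, grouping auxiliaries with their verb via the aux/auxpass dependency labels (mapping the resulting patterns to actual tense and aspect values is left open; the nlp pipeline loaded above is reused):

doc = nlp("John has been reading the book. Mary will call him tomorrow.")
for token in doc:
    auxiliaries = [child for child in token.children if child.dep_ in ("aux", "auxpass")]
    if auxiliaries:
        # the auxiliaries plus the fine-grained tag of the main verb hint at tense and aspect
        print([a.text for a in auxiliaries], token.text, token.tag_)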

Using the Dependency Parse Visualizer

More on Dependency Parse trees


In [16]:
import spacy

We can load the visualizer:


In [17]:
from spacy import displacy

Loading the English NLP pipeline:


In [18]:
nlp = spacy.load("en_core_web_sm")

Process an input sentence:


In [27]:
#doc = nlp(u'John said yesterday that Mary bought a new car for her older son.')
#doc = nlp(u"Dick ran and Jane danced yesterday.")
#doc = nlp(u"Tim Cook is the CEO of Apple.")
#doc = nlp(u"Born in a small town, she took the midnight train going anywhere.")
doc = nlp(u"John met Peter and Susan called Paul.")

If you want to generate a visualization by running code outside of the Jupyter notebook, you could use the following code. You should not use this code if you are running the notebook; instead, use the function displacy.render two cells below.

Visualizing the Dependency Parse tree can be achieved by running the following server code and opening a new tab at the URL http://localhost:5000/. You can shut down the server by clicking on the stop button in the notebook toolbar.


In [ ]:
displacy.serve(doc, style='dep')

Instead of serving the graph, one can render it directly into a Jupyter Notebook:


In [28]:
displacy.render(doc, style='dep', jupyter=True, options={"distance": 120})


[displaCy rendering of the dependency tree for "John met Peter and Susan called Paul.", showing each token with its POS tag (John PROPN, met VERB, Peter PROPN, and CCONJ, Susan PROPN, called VERB, Paul PROPN) and the arcs nsubj, cc, conj, ccomp, oprd.]

In addition to the visualization of the Dependency Trees, we can visualize named entity annotations:


In [24]:
text = """Apple decided to fire Tim Cook and hire somebody called John Doe as the new CEO.
They also discussed a merger with Google. On the long run it seems more likely that Apple
will merge with Amazon and Microsoft with Google. The companies will all relocate to
Austin in Texas before the end of the century. John Doe bought a Prosche."""

doc = nlp(text)
displacy.render(doc, style='ent', jupyter=True)


Apple ORG decided to fire Tim Cook PERSON and hire somebody called John Doe PERSON as the new CEO. They also discussed a merger with Google ORG . On the long run it seems more likely that Apple ORG will merge with Amazon ORG and Microsoft ORG with Google ORG . The companies will all relocate to Austin PERSON in Texas GPE before the end of the century DATE . John Doe PERSON bought a Prosche ORG .

Vectors

To use vectors in spaCy, you might consider installing the larger models for the particular language. The common module and language packages only come with the small models. The larger models can be installed as described on the spaCy vectors page:

python -m spacy download en_core_web_lg

The large model en_core_web_lg contains more than 1 million unique vectors.

Let us import all necessary modules again, in particular spaCy:


In [25]:
import spacy

We can now load the English NLP pipeline to process some word list. Since the small models in spaCy only include context-sensitive tensors, we should use the downloaded large model for better word vectors. We load the large model as follows:


In [26]:
nlp = spacy.load('en_core_web_lg')
#nlp = spacy.load("en_core_web_sm")
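
The size of the loaded vector table, i.e. the number of vectors and their dimensionality, can be inspected directly:

print(nlp.vocab.vectors.shape)  # (number of vectors, vector width)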

We can process a list of words by the pipeline using the nlp object:


In [29]:
tokens = nlp(u'dog poodle beagle cat banana apple')

As described in the spaCy chapter Word Vectors and Semantic Similarity, the resulting elements of Doc, Span, and Token provide a method similarity(), which returns the similarities between words:


In [30]:
for token1 in tokens:
    for token2 in tokens:
        print(token1, token2, token1.similarity(token2))


dog dog 1.0
dog poodle 0.67507446
dog beagle 0.6659215
dog cat 0.80168545
dog banana 0.24327643
dog apple 0.26339024
poodle dog 0.67507446
poodle poodle 1.0
poodle beagle 0.7116687
poodle cat 0.573045
poodle banana 0.22574891
poodle apple 0.19250016
beagle dog 0.6659215
beagle poodle 0.7116687
beagle beagle 1.0
beagle cat 0.5562765
beagle banana 0.17828682
beagle apple 0.21266584
cat dog 0.80168545
cat poodle 0.573045
cat beagle 0.5562765
cat cat 1.0
cat banana 0.28154364
cat apple 0.28213844
banana dog 0.24327643
banana poodle 0.22574891
banana beagle 0.17828682
banana cat 0.28154364
banana banana 1.0
banana apple 0.5831845
apple dog 0.26339024
apple poodle 0.19250016
apple beagle 0.21266584
apple cat 0.28213844
apple banana 0.5831845
apple apple 1.0

We can access the vectors of these objects using the vector attribute:


In [41]:
tokens = nlp(u'dog cat banana sasquatch')

for token in tokens:
    print(token.text, token.has_vector, token.vector_norm, token.is_oov)


dog True 7.0336733 False
cat True 6.6808186 False
banana True 6.700014 False
sasquatch True 6.9789977 False

The attribute has_vector returns a boolean indicating whether the token has a vector in the model or not. In the large model even sasquatch receives a vector here. A token that is missing from the vector table would be marked as out-of-vocabulary (OOV) in the fourth column and would have a vector norm of $0$, that is, a length of $0$.

In this model a token vector has $300$ dimensions. We can print out the vector for a token:


In [31]:
n = 0
print(tokens[n].text, len(tokens[n].vector), tokens[n].vector)


dog 300 [-4.0176e-01  3.7057e-01  2.1281e-02 -3.4125e-01  4.9538e-02  2.9440e-01
 -1.7376e-01 -2.7982e-01  6.7622e-02  2.1693e+00 -6.2691e-01  2.9106e-01
 -6.7270e-01  2.3319e-01 -3.4264e-01  1.8311e-01  5.0226e-01  1.0689e+00
  1.4698e-01 -4.5230e-01 -4.1827e-01 -1.5967e-01  2.6748e-01 -4.8867e-01
  3.6462e-01 -4.3403e-02 -2.4474e-01 -4.1752e-01  8.9088e-02 -2.5552e-01
 -5.5695e-01  1.2243e-01 -8.3526e-02  5.5095e-01  3.6410e-01  1.5361e-01
  5.5738e-01 -9.0702e-01 -4.9098e-02  3.8580e-01  3.8000e-01  1.4425e-01
 -2.7221e-01 -3.7016e-01 -1.2904e-01 -1.5085e-01 -3.8076e-01  4.9583e-02
  1.2755e-01 -8.2788e-02  1.4339e-01  3.2537e-01  2.7226e-01  4.3632e-01
 -3.1769e-01  7.9405e-01  2.6529e-01  1.0135e-01 -3.3279e-01  4.3117e-01
  1.6687e-01  1.0729e-01  8.9418e-02  2.8635e-01  4.0117e-01 -3.9222e-01
  4.5217e-01  1.3521e-01 -2.8878e-01 -2.2819e-02 -3.4975e-01 -2.2996e-01
  2.0224e-01 -2.1177e-01  2.7184e-01  9.1703e-02 -2.0610e-01 -6.5758e-01
  1.8949e-01 -2.6756e-01  9.2639e-02  4.3316e-01 -4.8868e-01 -3.8309e-01
 -2.1910e-01 -4.4183e-01  9.8044e-01  6.7423e-01  8.4003e-01 -1.8169e-01
  1.7385e-01  4.1848e-01  1.6098e-01 -1.0490e-01 -4.1965e-01 -3.5660e-01
 -1.6837e-01 -6.3458e-01  3.8422e-01 -3.5043e-01  1.7486e-01  5.3528e-01
  2.0143e-01  3.7877e-02  4.7105e-01 -4.4344e-01  1.6840e-01 -1.6685e-01
 -2.4022e-01 -1.0077e-01  3.0334e-01  4.2730e-01  3.3803e-01 -4.3481e-01
  1.1343e-01  6.1958e-02  6.1808e-02 -1.4007e-01  8.2018e-02 -3.9130e-02
  5.1442e-02  2.8725e-01  5.8025e-01 -5.7641e-01 -3.4652e-01  1.0132e-01
  1.4463e-01  1.1569e-02 -3.3701e-01 -1.7586e-01 -3.5724e-01 -2.1423e-01
  1.1429e-02  4.7645e-01 -3.7463e-02 -2.9488e-01 -1.7465e-01  3.0255e-01
  6.0317e-01 -6.6790e-02 -2.7050e+00 -7.0308e-01  4.0548e-01  6.2874e-01
  6.3080e-01 -5.4513e-01 -9.6191e-03  2.6533e-01  2.3391e-01 -5.1886e-02
 -6.5759e-03  1.8573e-02 -4.5693e-01 -7.0351e-02 -3.0621e-01 -1.4018e-02
 -2.0408e-01  3.7100e-01 -3.2354e-01 -8.4646e-01  2.7092e-01 -1.1961e-01
 -9.5576e-02 -6.0464e-01  4.2409e-02  2.4656e-01  3.8445e-02 -2.5467e-02
 -9.2908e-02 -2.1356e-01  3.6120e-01  1.9113e-02  6.2741e-02 -1.3083e-01
 -1.5146e-03  5.8238e-01 -1.8956e-01  7.8105e-01  1.0477e-02  1.0928e+00
  1.0140e-01 -3.6248e-01 -1.1962e-01 -3.4462e-01 -5.5704e-01  2.5797e-01
  3.3356e-01  3.3194e-01 -3.1298e-01 -7.5547e-01 -7.5290e-01 -9.3072e-02
 -1.1173e-01 -5.7251e-01  1.6639e-01  6.3579e-01  2.4006e-01 -2.9211e-01
  9.0182e-01  1.2425e-01 -5.7751e-01  4.7986e-02 -4.2748e-01  2.4446e-01
  4.7232e-02  3.5694e-01  4.4241e-01 -2.3055e-01  6.6037e-01 -7.3983e-03
 -3.7857e-01  2.2759e-01 -3.7138e-01  3.1055e-01 -7.2105e-02 -2.4490e-01
 -3.9761e-02  5.3650e-01 -4.1478e-01  1.6563e-01  3.3707e-01  1.0920e-01
  3.7219e-01 -5.5727e-01 -7.8060e-01  1.4251e-01 -3.5828e-01  4.1638e-01
  2.1446e-01  1.8410e-01 -4.7704e-01 -2.2005e-02 -2.3634e-01 -2.2840e-01
  3.4722e-01  2.3667e-01  7.4249e-02 -8.8416e-02  2.8618e-01 -4.6942e-01
 -4.3914e-01 -2.6474e-01 -3.0690e-01 -1.5260e-01 -8.4870e-02  2.8410e-01
 -1.8481e-01 -2.2122e-01 -1.1169e-01 -2.5241e-02  4.5968e-02  3.5343e-02
  2.2467e-01  5.1556e-01 -6.5137e-04  9.9559e-02 -1.4215e-01  2.0136e-01
  2.8334e-01 -2.8772e-01  3.7766e-02 -3.7608e-01 -1.1681e-01 -6.7020e-01
 -4.6265e-02  3.8784e-01 -3.2295e-02 -5.4291e-02 -4.5384e-01  1.9552e-01
 -2.9470e-01  8.5009e-01  1.0345e-01  9.7010e-02  1.1339e-01  3.9502e-01
  5.9043e-02  2.1978e-01  1.8845e-01 -1.5891e-01 -1.0301e-01  3.3164e-01
  6.1477e-02 -2.9848e-01  4.4510e-01  4.7329e-01  2.6312e-01 -1.8495e-01
  1.4652e-01 -3.1510e-02  2.2908e-02 -2.5929e-01 -3.0862e-01  1.7545e-03
 -1.8962e-01  5.4789e-01  3.1194e-01  2.4693e-01  2.9929e-01 -7.4861e-02]

Here is another example of similarities for a few well-known words:


In [8]:
tokens = nlp(u'queen king chef')

for token1 in tokens:
    for token2 in tokens:
        print(token1, token2, token1.similarity(token2))


queen queen 1.0
queen king 0.72526103
queen chef 0.24236034
king queen 0.72526103
king king 1.0
king chef 0.2525855
chef queen 0.24236034
chef king 0.2525855
chef chef 1.0

Similarities in Context

In spaCy, the parsing, tagging, and NER models make use of vector representations of contexts that represent the meaning of words. A text meaning representation is an array of floats, i.e. a tensor, computed during the NLP pipeline processing. With this approach, words that have not been seen before can still be typed or classified. SpaCy uses a 4-layer convolutional network for the computation of these tensors. In this approach the tensors model a context of four words to the left and right of any given word.
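
In spaCy 2.x these per-token context tensors are stored on the processed Doc and can be inspected directly; the width of the tensor depends on the model:

doc = nlp("The quick brown fox jumps over the lazy dog.")
# one row per token; the width is model-dependent
print(doc.tensor.shape)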

Let us use the example from the spaCy documentation and check the word labrador:


In [9]:
tokens = nlp(u'labrador')

for token in tokens:
    print(token.text, token.has_vector, token.vector_norm, token.is_oov)


labrador True 6.850418 False

We can now test the similarity of labrador to dog in different contexts:


In [10]:
doc1 = nlp(u"The labrador barked.")
doc2 = nlp(u"The labrador swam.")
doc3 = nlp(u"the labrador people live in canada.")

dog = nlp(u"dog")

count = 0
for doc in [doc1, doc2, doc3]:
    lab = doc[1]
    count += 1
    print(str(count) + ":", lab.similarity(dog))


1: 0.6349026162074578
2: 0.6349026162074578
3: 0.6349026162074578

Note that with the large model loaded, token similarity relies on the static word vectors, which is why labrador receives the same similarity score in all three contexts. Using the same strategy we can compute document or text similarities as well:


In [11]:
docs = ( nlp(u"Paris is the largest city in France."),
        nlp(u"Vilnius is the capital of Lithuania."),
        nlp(u"An emu is a large bird.") )

for x in range(len(docs)):
    for y in range(len(docs)):
        print(x, y, docs[x].similarity(docs[y]))


0 0 1.0
0 1 0.7554966079333336
0 2 0.6921462554756126
1 0 0.7554966079333336
1 1 1.0
1 2 0.5668025741640493
2 0 0.6921462554756126
2 1 0.5668025741640493
2 2 1.0

We can vary the word order in sentences and compare them. Since a Doc vector is the average of its token vectors, reordering the same words should leave the similarity essentially unchanged:


In [12]:
docs = [nlp(u"dog bites man"), nlp(u"man bites dog"),
        nlp(u"man dog bites"), nlp(u"cat eats mouse")]

for doc in docs:
    for other_doc in docs:
        print('"' + doc.text + '"', '"' + other_doc.text + '"', doc.similarity(other_doc))


"dog bites man" "dog bites man" 1.0
"dog bites man" "man bites dog" 0.9999999711588186
"dog bites man" "man dog bites" 0.9999999726761114
"dog bites man" "cat eats mouse" 0.7096953502122842
"man bites dog" "dog bites man" 0.9999999711588186
"man bites dog" "man bites dog" 1.0
"man bites dog" "man dog bites" 0.999999971568008
"man bites dog" "cat eats mouse" 0.7096953494258684
"man dog bites" "dog bites man" 0.9999999726761114
"man dog bites" "man bites dog" 0.999999971568008
"man dog bites" "man dog bites" 1.0
"man dog bites" "cat eats mouse" 0.7096953505026842
"cat eats mouse" "dog bites man" 0.7096953502122842
"cat eats mouse" "man bites dog" 0.7096953494258684
"cat eats mouse" "man dog bites" 0.7096953505026842
"cat eats mouse" "cat eats mouse" 1.0

Custom Models

Optimization


In [ ]:
nlp = spacy.load('en_core_web_lg')

Training Models

This example code for training an NER model is based on the NER training example in the spaCy documentation.

We will import some components from the __future__ module for Python 2/3 compatibility; see the __future__ documentation for details.


In [25]:
from __future__ import unicode_literals, print_function

We import the random module for pseudo-random number generation:


In [26]:
import random

We import the Path object from the pathlib module:


In [27]:
from pathlib import Path

We import spaCy:


In [28]:
import spacy

We also import the minibatch and compounding helpers from spacy.util:


In [29]:
from spacy.util import minibatch, compounding

The training data is formatted as a list of tuples, each containing a text and a dictionary with the entity annotations (start offset, end offset, label):


In [30]:
TRAIN_DATA = [
    ("Who is Shaka Khan?", {"entities": [(7, 17, "PERSON")]}),
    ("I like London and Berlin.", {"entities": [(7, 13, "LOC"), (18, 24, "LOC")]}),
]

We create a blank multi-language ('xx') model and add a new NER pipe to it:


In [31]:
nlp = spacy.blank("xx")  # create blank Language class
ner = nlp.create_pipe("ner")
nlp.add_pipe(ner, last=True)

We add the named entity labels to the NER model:


In [32]:
for _, annotations in TRAIN_DATA:
    for ent in annotations.get("entities"):
        ner.add_label(ent[2])

Assuming that the model is empty and untrained, we reset and initialize the weights randomly using:


In [33]:
nlp.begin_training()


Out[33]:
<thinc.neural.optimizers.Optimizer at 0x195b8bd0e08>

We would not do this if the model is supposed to be fine-tuned or retrained on new data.

We collect all pipe names in the model that are not related to the NER component, so that we can disable them during training:


In [34]:
pipe_exceptions = ["ner", "trf_wordpiecer", "trf_tok2vec"]
other_pipes = [pipe for pipe in nlp.pipe_names if pipe not in pipe_exceptions]

We can now disable the other pipes and train just the NER component using 100 iterations:


In [35]:
with nlp.disable_pipes(*other_pipes):  # only train NER
    for itn in range(100):
        random.shuffle(TRAIN_DATA)
        losses = {}
        # batch up the examples using spaCy's minibatch
        batches = minibatch(TRAIN_DATA, size=compounding(4.0, 32.0, 1.001))
        for batch in batches:
            texts, annotations = zip(*batch)
            nlp.update(
                texts,  # batch of texts
                annotations,  # batch of annotations
                drop=0.5,  # dropout - make it harder to memorise data
                losses=losses,
            )
        print("Losses", losses)


Losses {'ner': 9.899998903274536}
Losses {'ner': 9.771130681037903}
Losses {'ner': 9.662875056266785}
Losses {'ner': 9.284787654876709}
Losses {'ner': 8.981807231903076}
Losses {'ner': 8.70932126045227}
Losses {'ner': 8.161730885505676}
Losses {'ner': 7.948694348335266}
Losses {'ner': 7.611189246177673}
Losses {'ner': 7.288479924201965}
Losses {'ner': 6.590780735015869}
Losses {'ner': 6.2185728549957275}
Losses {'ner': 5.764992654323578}
Losses {'ner': 5.522161483764648}
Losses {'ner': 5.397007465362549}
Losses {'ner': 5.046939134597778}
Losses {'ner': 5.604821681976318}
Losses {'ner': 4.739657759666443}
Losses {'ner': 3.924922004342079}
Losses {'ner': 4.56282514333725}
Losses {'ner': 3.925854090601206}
Losses {'ner': 3.3651576302945614}
Losses {'ner': 5.091102749109268}
Losses {'ner': 3.7689528055489063}
Losses {'ner': 4.273776765912771}
Losses {'ner': 4.636930052191019}
Losses {'ner': 4.757026303559542}
Losses {'ner': 4.5286380760371685}
Losses {'ner': 4.163674160838127}
Losses {'ner': 3.9889527708292007}
Losses {'ner': 3.0439892262220383}
Losses {'ner': 3.055849850177765}
Losses {'ner': 3.575008988380432}
Losses {'ner': 2.545515540987253}
Losses {'ner': 2.715600423514843}
Losses {'ner': 2.1836754083633423}
Losses {'ner': 4.109894420951605}
Losses {'ner': 1.5611107051372528}
Losses {'ner': 3.892983704805374}
Losses {'ner': 3.62281196936965}
Losses {'ner': 1.8465841109864414}
Losses {'ner': 3.0392874367535114}
Losses {'ner': 3.236401729285717}
Losses {'ner': 1.4246960915625095}
Losses {'ner': 2.6635809030849487}
Losses {'ner': 2.5492967040045187}
Losses {'ner': 2.128100039437413}
Losses {'ner': 2.0054658054723404}
Losses {'ner': 1.5138008354697376}
Losses {'ner': 2.26066195522435}
Losses {'ner': 1.7702724537812173}
Losses {'ner': 1.0542208638507873}
Losses {'ner': 1.6254844756331295}
Losses {'ner': 1.1217406265204772}
Losses {'ner': 1.762260497547686}
Losses {'ner': 1.280116155743599}
Losses {'ner': 1.0819223700091243}
Losses {'ner': 1.1411586576141417}
Losses {'ner': 0.7897612750530243}
Losses {'ner': 1.1978556619969822}
Losses {'ner': 1.0774077119422145}
Losses {'ner': 0.524103322357405}
Losses {'ner': 0.14277109322574688}
Losses {'ner': 0.2241282734539709}
Losses {'ner': 0.34636850090464577}
Losses {'ner': 0.08223805215675384}
Losses {'ner': 0.05549280589184491}
Losses {'ner': 0.0024851516220678604}
Losses {'ner': 0.05691710543760564}
Losses {'ner': 1.0407492580460769}
Losses {'ner': 0.0526981148861978}
Losses {'ner': 1.1232850731925055}
Losses {'ner': 0.00953761925939034}
Losses {'ner': 0.005828056969903628}
Losses {'ner': 0.15594270042975822}
Losses {'ner': 0.06504033425798639}
Losses {'ner': 0.003363491304071431}
Losses {'ner': 0.0003446241275923967}
Losses {'ner': 0.00012553760564060212}
Losses {'ner': 0.14084735384085434}
Losses {'ner': 0.0039026450327241946}
Losses {'ner': 0.00024963428050739367}
Losses {'ner': 0.00042513598606319647}
Losses {'ner': 0.01237983034456902}
Losses {'ner': 2.480665149295902e-06}
Losses {'ner': 5.232076421825618e-05}
Losses {'ner': 5.5753141904174575e-06}
Losses {'ner': 9.668284166696253e-06}
Losses {'ner': 8.882167244483874e-07}
Losses {'ner': 4.3150865385857204e-07}
Losses {'ner': 0.009391245102135057}
Losses {'ner': 6.893534022518875e-05}
Losses {'ner': 5.031418208979732e-05}
Losses {'ner': 6.117600574990499e-08}
Losses {'ner': 0.005182308305785062}
Losses {'ner': 2.9338899101630234e-05}
Losses {'ner': 3.6279415018269923e-06}
Losses {'ner': 1.0100381777123398e-06}
Losses {'ner': 2.8042602515369885e-05}
Losses {'ner': 6.13335453238987e-05}

We can test the trained model:


In [36]:
for text, _ in TRAIN_DATA:
    doc = nlp(text)
    print("Entities", [(ent.text, ent.label_) for ent in doc.ents])
    print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc])


Entities [('London', 'LOC'), ('Berlin', 'LOC')]
Tokens [('I', '', 2), ('like', '', 2), ('London', 'LOC', 3), ('and', '', 2), ('Berlin', 'LOC', 3), ('.', '', 2)]
Entities [('Shaka Khan', 'PERSON')]
Tokens [('Who', '', 2), ('is', '', 2), ('Shaka', 'PERSON', 3), ('Khan', 'PERSON', 1), ('?', '', 2)]

We can define the output directory where the model will be saved as the models folder in the directory where the notebook is running:


In [37]:
output_dir = Path("./models/")

Save the model to the output directory:


In [38]:
if not output_dir.exists():
    output_dir.mkdir()
nlp.to_disk(output_dir)

To make sure everything worked out well, we can test the saved model:


In [39]:
nlp2 = spacy.load(output_dir)
for text, _ in TRAIN_DATA:
    doc = nlp2(text)
    print("Entities", [(ent.text, ent.label_) for ent in doc.ents])
    print("Tokens", [(t.text, t.ent_type_, t.ent_iob) for t in doc])


Entities [('London', 'LOC'), ('Berlin', 'LOC')]
Tokens [('I', '', 2), ('like', '', 2), ('London', 'LOC', 3), ('and', '', 2), ('Berlin', 'LOC', 3), ('.', '', 2)]
Entities [('Shaka Khan', 'PERSON')]
Tokens [('Who', '', 2), ('is', '', 2), ('Shaka', 'PERSON', 3), ('Khan', 'PERSON', 1), ('?', '', 2)]