Seminar 6: Recurrent Neural Networks and Natural Language Processing

Encoding and Decoding Sequences

Unfolding an RNN in Time

Multiple Layers in Recurrent Networks

Implementing Our First Recurrent Network

TensorFlow supports several RNN variants, which can be found in the tf.nn.rnn_cell module. With the tf.nn.dynamic_rnn() operation, TensorFlow also implements the RNN dynamics for us. There is an older version of this function that adds the unfolded operations to the graph instead of using a loop, but it consumes considerably more memory and offers no real benefit, so we use the newer dynamic_rnn() operation. As parameters, dynamic_rnn() takes a recurrent cell definition and the batch of input sequences. For now, all sequences have the same length. The function adds the required computations for the RNN to the compute graph and returns two tensors: one holding the outputs at every time step and one holding the final hidden state.


In [1]:
import tensorflow as tf

# The input data has dimensions batch_size * sequence_length * frame_size.
# To avoid restricting ourselves to a fixed batch size, we use None as the
# size of the first dimension.
sequence_length = 1440
frame_size = 10
data = tf.placeholder(tf.float32, [None, sequence_length, frame_size])

num_neurons = 200
network = tf.contrib.rnn.BasicRNNCell(num_neurons)
# Define the operations to simulate the RNN for sequence_length steps.
outputs, states = tf.nn.dynamic_rnn(network, data, dtype=tf.float32)
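
To get a feel for what dynamic_rnn() returns, we can feed a random batch through this graph and inspect the shapes. This is a minimal sketch assuming TensorFlow 1.x, reusing the tensors defined in the cell above; the batch contents are random and only serve to show the shapes.

import numpy as np

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.randn(4, sequence_length, frame_size).astype(np.float32)
    out, state = sess.run([outputs, states], {data: batch})
    print(out.shape)    # (4, 1440, 200): one output vector per time step
    print(state.shape)  # (4, 200): the final hidden state of each sequence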

Sequence Classification


In [2]:
import tarfile
import re
import urllib.request
import os
import random

class ImdbMovieReviews:
    """
    The movie review dataset is offered by Stanford University’s AI department:
    http://ai.stanford.edu/~amaas/data/sentiment/. It comes as a compressed tar archive where
    positive and negative reviews can be found as text files in two corresponding folders. We
    apply the same pre-processing to the text as in the last section: extracting plain words
    using a regular expression and converting them to lower case.
    """
    DEFAULT_URL = \
        'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
    TOKEN_REGEX = re.compile(r'[A-Za-z]+|[!?.:,()]')
    
    def __init__(self, cache_file='imdb', url=DEFAULT_URL):
        # Download the archive once and cache it locally under cache_file.
        self._cache_file = cache_file
        self._url = url
        if not os.path.isfile(self._cache_file):
            urllib.request.urlretrieve(self._url, self._cache_file)
        self.filepath = self._cache_file

    def __iter__(self):
        with tarfile.open(self.filepath) as archive:
            for filename in archive.getnames():
                if filename.startswith('aclImdb/train/pos/'):
                    yield self._read(archive, filename), True
                elif filename.startswith('aclImdb/train/neg/'):
                    yield self._read(archive, filename), False
                    
    def _read(self, archive, filename):
        with archive.extractfile(filename) as file_:
            data = file_.read().decode('utf-8')
            data = type(self).TOKEN_REGEX.findall(data)
            data = [x.lower() for x in data]
            return data
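
To illustrate what the tokenizer keeps, here is a small sketch applying the TOKEN_REGEX defined above to a made-up sentence: words and basic punctuation survive, everything else (such as the apostrophe) is dropped.

sample = "This movie wasn't great, but I liked it!"
tokens = [t.lower() for t in ImdbMovieReviews.TOKEN_REGEX.findall(sample)]
print(tokens)
# ['this', 'movie', 'wasn', 't', 'great', ',', 'but', 'i', 'liked', 'it', '!']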

The code should be straightforward. We just use the vocabulary to determine the index of a word and use that index to find the right embedding vector. The following class also pads the sequences to the same length, so we can easily fit batches of multiple reviews into our network later.


In [3]:
import numpy as np
# spaCy is my favourite NLP framework; it has built-in word embeddings trained on Wikipedia.
from spacy.en import English

class Embedding:
    
    def __init__(self, length):
        # spaCy makes using word vectors very easy. The Lexeme, Token, Span and Doc
        # classes all have a .vector property, which is a one-dimensional numpy
        # array of 32-bit floats.
        self.parser = English()
        self._length = length
        self.dimensions = 300
        
    def __call__(self, sequence):
        data = np.zeros((self._length, self.dimensions))
        # you can access known words from the parser's vocabulary
        embedded = [self.parser.vocab[w].vector for w in sequence]
        data[:len(sequence)] = embedded
        return data
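
A short usage sketch, assuming the spaCy English model provides the 300-dimensional word vectors the class expects: embedding a three-token review fills the first three rows and leaves the remaining rows as zero padding.

embedding_demo = Embedding(length=8)
vectors = embedding_demo(['great', 'movie', '!'])
print(vectors.shape)               # (8, 300)
print(np.abs(vectors[3:]).sum())   # 0.0 -- the padded rows stay zero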

Sequence Labelling Model

We want to classify the sentiment of text sequences. Because this is a supervised setting, the model defines two placeholders: one for the input data (the sequence) and one for the target value (the sentiment). We also pass in a params object that contains configuration parameters such as the size of the recurrent layer, its cell architecture (LSTM, GRU, etc.), and the optimizer to use.


In [4]:
from lazy import lazy

class SequenceClassificationModel:
    def __init__(self, params):
        self.params = params
        self._create_placeholders()
        self.prediction
        self.cost
        self.error
        self.optimize
        self._create_summaries()
    
    def _create_placeholders(self):
        with tf.name_scope("data"):
            self.data = tf.placeholder(tf.float32, [None, self.params.seq_length, self.params.embed_length])
            self.target = tf.placeholder(tf.float32, [None, 2])
  
    def _create_summaries(self):
        with tf.name_scope("summaries"):
            tf.summary.scalar('loss', self.cost)
            tf.summary.scalar('error', self.error)
            self.summary = tf.summary.merge_all()
            
    @lazy
    def length(self):
    # First, we obtain the lengths of sequences in the current data batch. We need this since
    # the data comes as a single tensor, padded with zero vectors to the longest review length.
    # Instead of keeping track of the sequence lengths of every review, we just compute it
    # dynamically in TensorFlow.
    
        with tf.name_scope("seq_length"):
            used = tf.sign(tf.reduce_max(tf.abs(self.data), reduction_indices=2))
            length = tf.reduce_sum(used, reduction_indices=1)
            length = tf.cast(length, tf.int32)
        return length
    
    @lazy
    def prediction(self):
    # Note that the last relevant output activation of the RNN has a different index for each
    # sequence in the training batch. This is because each review has a different length. We
    # already know the length of each sequence.
    # The problem is that we want to index in the dimension of time steps, which is
    # the second dimension in the batch of shape  (sequences, time_steps, word_vectors) .
    
        with tf.name_scope("recurrent_layer"):
            output, _ = tf.nn.dynamic_rnn(
                self.params.rnn_cell(self.params.rnn_hidden),
                self.data,
                dtype=tf.float32,
                sequence_length=self.length
            )
        last = self._last_relevant(output, self.length)

        with tf.name_scope("softmax_layer"):
            num_classes = int(self.target.get_shape()[1])
            weight = tf.Variable(tf.truncated_normal(
                [self.params.rnn_hidden, num_classes], stddev=0.01))
            bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))
            prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
        return prediction
    
    @lazy
    def cost(self):
        cross_entropy = -tf.reduce_sum(self.target * tf.log(self.prediction))
        return cross_entropy
    
    @lazy
    def error(self):
        self.mistakes = tf.not_equal(
            tf.argmax(self.target, 1), tf.argmax(self.prediction, 1))
        return tf.reduce_mean(tf.cast(self.mistakes, tf.float32))
    
    @lazy
    def optimize(self):
    # RNNs are quite hard to train and the weights tend to diverge if the hyperparameters do not
    # play nicely together. The idea of gradient clipping is to restrict the values of the
    # gradient to a sensible range. This way, we can limit the maximum weight updates.

        with tf.name_scope("optimization"):
            gradient = self.params.optimizer.compute_gradients(self.cost)
            if self.params.gradient_clipping:
                limit = self.params.gradient_clipping
                gradient = [
                    (tf.clip_by_value(g, -limit, limit), v)
                    if g is not None else (None, v)
                    for g, v in gradient]
            optimize = self.params.optimizer.apply_gradients(gradient)
        return optimize
    
    @staticmethod
    def _last_relevant(output, length):
        with tf.name_scope("last_relevant"):
            # As of now, TensorFlow only supports indexing along the first dimension, using
            # tf.gather(). We thus flatten the first two dimensions of the output activations
            # from their shape of (sequences, time_steps, word_vectors) and construct an index
            # into this resulting tensor.
            batch_size = tf.shape(output)[0]
            max_length = int(output.get_shape()[1])
            output_size = int(output.get_shape()[2])

            # The index takes into account the start indices for each sequence in the flat tensor and adds
            # the sequence length to it. Actually, we only add  length - 1  so that we select the last valid
            # time step.
            index = tf.range(0, batch_size) * max_length + (length - 1)
            flat = tf.reshape(output, [-1, output_size])
            relevant = tf.gather(flat, index)
        return relevant
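
The indexing trick in _last_relevant is easiest to see on a tiny NumPy example. This is a sketch with made-up numbers that mirrors the flatten-and-gather logic above.

import numpy as np

output = np.arange(24).reshape(2, 3, 4)   # (sequences, time_steps, output_units)
length = np.array([2, 3])                 # valid time steps of each sequence
index = np.arange(2) * 3 + (length - 1)   # [1, 5]: last valid step of each sequence in the flat tensor
flat = output.reshape(-1, 4)              # (6, 4)
print(flat[index])                        # row 1 (sequence 0, step 1) and row 5 (sequence 1, step 2)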

In [5]:
def preprocess_batched(iterator, length, embedding, batch_size):
    iterator = iter(iterator)
    while True:
        data = np.zeros((batch_size, length, embedding.dimensions))
        target = np.zeros((batch_size, 2))
        for index in range(batch_size):
            try:
                text, label = next(iterator)
            except StopIteration:
                # The reviews are exhausted; stop and drop any incomplete final batch.
                return
            data[index] = embedding(text)
            target[index] = [1, 0] if label else [0, 1]
        yield data, target

In [6]:
reviews = list(ImdbMovieReviews())

In [7]:
random.shuffle(reviews)

In [8]:
length = max(len(x[0]) for x in reviews)
embedding = Embedding(length)
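
Before setting up training, it can help to sanity-check the shapes one batch will have. This is a small sketch using the objects defined above; it builds a separate throwaway batch iterator, so the main training batches below are unaffected.

demo_batches = preprocess_batched(reviews, length, embedding, batch_size=4)
demo_data, demo_target = next(demo_batches)
print(demo_data.shape)    # (4, length, 300)
print(demo_target.shape)  # (4, 2): one-hot sentiment labels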

In [9]:
from attrdict import AttrDict

params = AttrDict(
    rnn_cell=tf.contrib.rnn.GRUCell,
    rnn_hidden=300,
    optimizer=tf.train.RMSPropOptimizer(0.002),
    batch_size=20,
    gradient_clipping=100,
    seq_length=length,
    embed_length=embedding.dimensions
)

In [10]:
batches = preprocess_batched(reviews, length, embedding, params.batch_size)

In [ ]:
tf.reset_default_graph()

model = SequenceClassificationModel(params)


/home/kurbanov/Soft/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:95: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

In [ ]:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    summary_writer = tf.summary.FileWriter('graphs', sess.graph)
    for index, batch in enumerate(batches):
        feed = {model.data: batch[0], model.target: batch[1]}
        error, _, summary_str = sess.run([model.error, model.optimize, model.summary], feed)
        print('{}: {:3.1f}%'.format(index + 1, 100 * error))
        summary_writer.add_summary(summary_str, index)


1: 45.0%
2: 50.0%
3: 25.0%
4: 70.0%
5: 30.0%
6: 40.0%
7: 55.0%
8: 50.0%
9: 40.0%
10: 60.0%
11: 40.0%
12: 40.0%
13: 35.0%
14: 65.0%
15: 50.0%
16: 55.0%
17: 60.0%
18: 45.0%
19: 70.0%
20: 55.0%
[... 1,137 lines of per-batch error output omitted: the error keeps fluctuating from batch to batch but trends downward over the course of training ...]
1158: 10.0%
1159: 10.0%
1160: 10.0%
1161: 15.0%
1162: 10.0%
1163: 20.0%
1164: 15.0%

In [ ]: