Time series prediction using RNNs, with TensorFlow and Cloud ML Engine

This notebook illustrates:

  1. Creating a Recurrent Neural Network in TensorFlow
  2. Creating a Custom Estimator in tf.contrib.learn
  3. Training on Cloud ML Engine

Simulate some time-series data

Essentially a set of sinusoids with random amplitudes and frequencies.


In [ ]:
!pip install --upgrade tensorflow

In [1]:
import tensorflow as tf
print tf.__version__


1.3.0

In [2]:
import numpy as np
import tensorflow as tf
import seaborn as sns
import pandas as pd

SEQ_LEN = 10
def create_time_series():
  freq = (np.random.random()*0.5) + 0.1  # 0.1 to 0.6
  ampl = np.random.random() + 0.5  # 0.5 to 1.5
  x = np.sin(np.arange(0,SEQ_LEN) * freq) * ampl
  return x

for i in xrange(0, 5):
  sns.tsplot( create_time_series() );  # 5 series

In [3]:
def to_csv(filename, N):
  with open(filename, 'w') as ofp:
    for lineno in xrange(0, N):
      seq = create_time_series()
      line = ",".join(map(str, seq))
      ofp.write(line + '\n')

to_csv('train.csv', 1000)  # 1000 sequences
to_csv('valid.csv',  50)

In [4]:
!head -5 train.csv valid.csv


==> train.csv <==
0.0,0.458016136576,0.85649955982,1.14365558935,1.28215982494,1.25400955383,1.06286373448,0.733567406955,0.3089223468,-0.155876288843
0.0,0.0890643185319,0.17672202985,0.261588741517,0.342324139805,0.417653157487,0.486386111181,0.547437490213,0.599843100262,0.642775291037
0.0,0.208727575881,0.408651491509,0.591339405243,0.749085951283,0.875237734687,0.964473956626,1.01303083373,1.01886034599,0.981716617683
0.0,0.746214704723,1.28915133773,1.48090727645,1.26924584259,0.711826216378,-0.0395036340001,-0.780072208173,-1.30813950989,-1.47985350547
0.0,0.161050249938,0.305097831313,0.416935109958,0.484755012102,0.501397540673,0.465105682579,0.379710903063,0.254228643787,0.101906529426

==> valid.csv <==
0.0,0.716765082738,1.215678526,1.34510192806,1.06569869034,0.462389876775,-0.281456464577,-0.939757689789,-1.31243150603,-1.28620844215
0.0,0.215820464166,0.392256140776,0.497109519422,0.511246047417,0.432085968694,0.274075100037,0.066048633316,-0.15403096007,-0.346001649776
0.0,0.689946143628,1.15973012508,1.25944363908,0.957268053068,0.349627748835,-0.369578946271,-0.970852813346,-1.26232705835,-1.15099210279
0.0,0.415923514155,0.793843271173,1.09922799265,1.30417407036,1.38995517435,1.34873331486,1.18427501503,0.911607156171,0.555643941824
0.0,0.29662226229,0.502355945913,0.55416181051,0.436165924826,0.184523627493,-0.123658897482,-0.393950869856,-0.543531624032,-0.526567843122

RNN

For more info, see:

  1. http://colah.github.io/posts/2015-08-Understanding-LSTMs/ for the theory
  2. https://www.tensorflow.org/tutorials/recurrent for explanations
  3. https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb for sample code

Here, we are trying to predict the next two values of a time series, given its first 8 values.
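
For example, with the first sequence in train.csv (values rounded for readability), the split into model inputs and prediction targets looks like this:

In [ ]:
# Worked example of the 8-input / 2-label split on one sequence
# (values rounded from the first row of train.csv above).
seq = [0.0, 0.458, 0.856, 1.144, 1.282, 1.254, 1.063, 0.734, 0.309, -0.156]
features, labels = seq[:8], seq[8:]
print 'features =', features  # the 8 values the model sees
print 'labels   =', labels    # the 2 values it must predict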

Imports

Several TensorFlow packages, plus shutil.


In [5]:
import tensorflow as tf
import shutil
import tensorflow.contrib.learn as tflearn
import tensorflow.contrib.layers as tflayers
from tensorflow.contrib.learn.python.learn import learn_runner
import tensorflow.contrib.metrics as metrics
import tensorflow.contrib.rnn as rnn

Input Fn to read CSV

Our CSV file structure is quite simple -- a bunch of floating point numbers (note the type of DEFAULTS). We ask for the data to be read BATCH_SIZE sequences at a time. The Estimator API in tf.contrib.learn wants the features returned as a dict. We'll just call this timeseries column 'rawdata'.

Our CSV file sequences consist of 10 numbers. We'll assume that 8 of them are inputs and we need to predict the next two.


In [6]:
DEFAULTS = [[0.0] for x in xrange(0, SEQ_LEN)]
BATCH_SIZE = 20
TIMESERIES_COL = 'rawdata'
N_OUTPUTS = 2  # in each sequence, values 1-8 are features and values 9-10 are labels
N_INPUTS = SEQ_LEN - N_OUTPUTS

Reading data using the Estimator API in tf.contrib.learn requires an input_fn. This input_fn needs to return a dict of features and the corresponding labels.

So, we read the CSV file. Each record comes in as a Tensor of shape batchsize x 1 -- the entire line as a single string. We then decode the CSV. At this point, all_data will contain a list of tensors, each of shape batchsize x 1. There will be 10 of these tensors, since SEQ_LEN is 10.

We split these 10 tensors into 8 and 2 (N_OUTPUTS is 2): the first 8 go into a dict that we call features, and the last 2 are the ground truth, so labels.


In [7]:
# read data and convert to needed format
def read_dataset(filename, mode=tf.contrib.learn.ModeKeys.TRAIN):  
  def _input_fn():
    num_epochs = 100 if mode == tf.contrib.learn.ModeKeys.TRAIN else 1
    
    # could be a path to one file or a file pattern.
    input_file_names = tf.train.match_filenames_once(filename)
    
    filename_queue = tf.train.string_input_producer(
        input_file_names, num_epochs=num_epochs, shuffle=True)
    reader = tf.TextLineReader()
    _, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE)

    value_column = tf.expand_dims(value, -1)
    print 'readcsv={}'.format(value_column)
    
    # all_data is a list of tensors
    all_data = tf.decode_csv(value_column, record_defaults=DEFAULTS)  
    inputs = all_data[:len(all_data)-N_OUTPUTS]  # first few values
    label = all_data[len(all_data)-N_OUTPUTS : ] # last few values
    
    # from list of tensors to tensor with one more dimension
    inputs = tf.concat(inputs, axis=1)
    label = tf.concat(label, axis=1)
    print 'inputs={}'.format(inputs)
    
    return {TIMESERIES_COL: inputs}, label   # dict of features, label
  return _input_fn

Define RNN

A recurrent neural network consists of one or more (possibly stacked) LSTM cells.

The RNN has one output per input, so it will have 8 outputs. We use only the last output, but rather than use it directly, we multiply it by a set of weights to get the actual predictions. This allows for a degree of scaling between inputs and predictions if necessary (we don't really need it in this problem).

Finally, to supply a model function to the Estimator API, you need to return a ModelFnOps. The rest of the function creates the necessary objects.


In [8]:
LSTM_SIZE = 3  # number of hidden units in each of the LSTM cells

# create the inference model
def simple_rnn(features, targets, mode):
  # 0. Reformat input shape to become a sequence
  x = tf.split(features[TIMESERIES_COL], N_INPUTS, 1)
  #print 'x={}'.format(x)
    
  # 1. configure the RNN
  lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
  outputs, _ = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

  # slice to keep only the last cell of the RNN
  outputs = outputs[-1]
  #print 'last outputs={}'.format(outputs)
  
  # output is result of linear activation of last layer of RNN
  weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
  bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
  predictions = tf.matmul(outputs, weight) + bias
    
  # 2. loss function, training/eval ops
  if mode == tf.contrib.learn.ModeKeys.TRAIN or mode == tf.contrib.learn.ModeKeys.EVAL:
     loss = tf.losses.mean_squared_error(targets, predictions)
     train_op = tf.contrib.layers.optimize_loss(
         loss=loss,
         global_step=tf.contrib.framework.get_global_step(),
         learning_rate=0.01,
         optimizer="SGD")
     eval_metric_ops = {
      "rmse": tf.metrics.root_mean_squared_error(targets, predictions)
     }
  else:
     loss = None
     train_op = None
     eval_metric_ops = None
  
  # 3. Create predictions
  predictions_dict = {"predicted": predictions}
  
  # 4. return ModelFnOps
  return tflearn.ModelFnOps(
      mode=mode,
      predictions=predictions_dict,
      loss=loss,
      train_op=train_op,
      eval_metric_ops=eval_metric_ops)

Experiment

Distributed training is launched using an Experiment. The key line here is that we use tflearn.Estimator rather than, say, tflearn.DNNRegressor. This allows us to provide a model_fn, which will be our RNN defined above. Note also that we specify a serving_input_fn -- this is how we parse the input data provided to us at prediction time.


In [9]:
def get_train():
  return read_dataset('train.csv', mode=tf.contrib.learn.ModeKeys.TRAIN)

def get_valid():
  return read_dataset('valid.csv', mode=tf.contrib.learn.ModeKeys.EVAL)

def serving_input_fn():
    feature_placeholders = {
        TIMESERIES_COL: tf.placeholder(tf.float32, [None, N_INPUTS])
    }
  
    features = {
      key: tf.expand_dims(tensor, -1)
      for key, tensor in feature_placeholders.items()
    }
    features[TIMESERIES_COL] = tf.squeeze(features[TIMESERIES_COL], axis=[2])
    
    print 'serving: features={}'.format(features[TIMESERIES_COL])
    
    return tflearn.utils.input_fn_utils.InputFnOps(
      features,
      None,
      feature_placeholders
    )

from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
def experiment_fn(output_dir):
    # run experiment
    return tflearn.Experiment(
        tflearn.Estimator(model_fn=simple_rnn, model_dir=output_dir),
        train_input_fn=get_train(),
        eval_input_fn=get_valid(),
        eval_metrics={
            'rmse': tflearn.MetricSpec(
                metric_fn=metrics.streaming_root_mean_squared_error
            )
        },
        export_strategies=[saved_model_export_utils.make_export_strategy(
            serving_input_fn,
            default_output_alternative_key=None,
            exports_to_keep=1
        )]
    )

shutil.rmtree('outputdir', ignore_errors=True) # start fresh each time
learn_runner.run(experiment_fn, 'outputdir')


INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': None, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x116e323d0>, '_model_dir': 'outputdir', '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_session_config': None, '_tf_random_seed': None, '_save_summary_steps': 100, '_environment': 'local', '_num_worker_replicas': 0, '_task_id': 0, '_log_step_count_steps': 100, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1
}
, '_evaluation_master': '', '_master': ''}
WARNING:tensorflow:From /usr/local/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/monitors.py:269: __init__ (from tensorflow.contrib.learn.python.learn.monitors) is deprecated and will be removed after 2016-12-05.
Instructions for updating:
Monitors are deprecated. Please use tf.train.SessionRunHook.
readcsv=Tensor("ExpandDims:0", shape=(?, 1), dtype=string)
inputs=Tensor("concat:0", shape=(?, 8), dtype=float32)
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into outputdir/model.ckpt.
readcsv=Tensor("ExpandDims:0", shape=(?, 1), dtype=string)
inputs=Tensor("concat:0", shape=(?, 8), dtype=float32)
INFO:tensorflow:Starting evaluation at 2017-10-27-17:16:05
INFO:tensorflow:Restoring parameters from outputdir/model.ckpt-1
INFO:tensorflow:Evaluation [1/100]
INFO:tensorflow:Evaluation [2/100]
INFO:tensorflow:Evaluation [3/100]
INFO:tensorflow:Finished evaluation at 2017-10-27-17:16:06
INFO:tensorflow:Saving dict for global step 1: global_step = 1, loss = 1.14143, rmse = 1.05111
INFO:tensorflow:Validation (step 1): loss = 1.14143, global_step = 1, rmse = 1.05111
INFO:tensorflow:loss = 0.754056, step = 1
INFO:tensorflow:global_step/sec: 24.7194
INFO:tensorflow:loss = 0.558977, step = 101 (0.467 sec)
INFO:tensorflow:global_step/sec: 286.14
INFO:tensorflow:loss = 0.563291, step = 201 (0.350 sec)
INFO:tensorflow:global_step/sec: 283.305
INFO:tensorflow:loss = 0.555495, step = 301 (0.353 sec)
INFO:tensorflow:global_step/sec: 274.187
INFO:tensorflow:loss = 0.54341, step = 401 (0.364 sec)
INFO:tensorflow:global_step/sec: 285.101
INFO:tensorflow:loss = 0.528197, step = 501 (0.351 sec)
INFO:tensorflow:global_step/sec: 281.041
INFO:tensorflow:loss = 0.509221, step = 601 (0.356 sec)
INFO:tensorflow:global_step/sec: 242.753
INFO:tensorflow:loss = 0.485625, step = 701 (0.412 sec)
INFO:tensorflow:global_step/sec: 287.877
INFO:tensorflow:loss = 0.456866, step = 801 (0.347 sec)
INFO:tensorflow:global_step/sec: 277.193
INFO:tensorflow:loss = 0.423036, step = 901 (0.361 sec)
INFO:tensorflow:global_step/sec: 288.181
INFO:tensorflow:loss = 0.385049, step = 1001 (0.347 sec)
INFO:tensorflow:global_step/sec: 279.548
INFO:tensorflow:loss = 0.344566, step = 1101 (0.359 sec)
INFO:tensorflow:global_step/sec: 221.681
INFO:tensorflow:loss = 0.30366, step = 1201 (0.450 sec)
INFO:tensorflow:global_step/sec: 281.193
INFO:tensorflow:loss = 0.264373, step = 1301 (0.356 sec)
INFO:tensorflow:global_step/sec: 280.77
INFO:tensorflow:loss = 0.228317, step = 1401 (0.356 sec)
INFO:tensorflow:global_step/sec: 226.007
INFO:tensorflow:loss = 0.196465, step = 1501 (0.442 sec)
INFO:tensorflow:global_step/sec: 262.509
INFO:tensorflow:loss = 0.169129, step = 1601 (0.381 sec)
INFO:tensorflow:global_step/sec: 259.499
INFO:tensorflow:loss = 0.14612, step = 1701 (0.385 sec)
INFO:tensorflow:global_step/sec: 270.954
INFO:tensorflow:loss = 0.126949, step = 1801 (0.369 sec)
INFO:tensorflow:global_step/sec: 281.258
INFO:tensorflow:loss = 0.111015, step = 1901 (0.356 sec)
INFO:tensorflow:global_step/sec: 237.75
INFO:tensorflow:loss = 0.0977329, step = 2001 (0.421 sec)
INFO:tensorflow:global_step/sec: 275.634
INFO:tensorflow:loss = 0.0865939, step = 2101 (0.362 sec)
INFO:tensorflow:global_step/sec: 246.951
INFO:tensorflow:loss = 0.077181, step = 2201 (0.405 sec)
INFO:tensorflow:global_step/sec: 276.215
INFO:tensorflow:loss = 0.0691655, step = 2301 (0.362 sec)
INFO:tensorflow:global_step/sec: 251.153
INFO:tensorflow:loss = 0.0622921, step = 2401 (0.398 sec)
INFO:tensorflow:global_step/sec: 281.838
INFO:tensorflow:loss = 0.0563631, step = 2501 (0.355 sec)
INFO:tensorflow:global_step/sec: 255.668
INFO:tensorflow:loss = 0.0512246, step = 2601 (0.391 sec)
INFO:tensorflow:global_step/sec: 280.385
INFO:tensorflow:loss = 0.0467549, step = 2701 (0.357 sec)
INFO:tensorflow:global_step/sec: 282.336
INFO:tensorflow:loss = 0.0428558, step = 2801 (0.354 sec)
INFO:tensorflow:global_step/sec: 286.079
INFO:tensorflow:loss = 0.0394464, step = 2901 (0.350 sec)
INFO:tensorflow:global_step/sec: 269.206
INFO:tensorflow:loss = 0.0364588, step = 3001 (0.374 sec)
INFO:tensorflow:global_step/sec: 247.206
INFO:tensorflow:loss = 0.0338347, step = 3101 (0.402 sec)
INFO:tensorflow:global_step/sec: 278.772
INFO:tensorflow:loss = 0.031524, step = 3201 (0.359 sec)
INFO:tensorflow:global_step/sec: 273.998
INFO:tensorflow:loss = 0.0294836, step = 3301 (0.365 sec)
INFO:tensorflow:global_step/sec: 280.363
INFO:tensorflow:loss = 0.0276757, step = 3401 (0.357 sec)
INFO:tensorflow:global_step/sec: 287.854
INFO:tensorflow:loss = 0.0260684, step = 3501 (0.347 sec)
INFO:tensorflow:global_step/sec: 276.313
INFO:tensorflow:loss = 0.0246341, step = 3601 (0.362 sec)
INFO:tensorflow:global_step/sec: 284.722
INFO:tensorflow:loss = 0.0233495, step = 3701 (0.351 sec)
INFO:tensorflow:global_step/sec: 270.589
INFO:tensorflow:loss = 0.0221947, step = 3801 (0.369 sec)
INFO:tensorflow:global_step/sec: 279.52
INFO:tensorflow:loss = 0.021153, step = 3901 (0.358 sec)
INFO:tensorflow:global_step/sec: 283.809
INFO:tensorflow:loss = 0.02021, step = 4001 (0.352 sec)
INFO:tensorflow:global_step/sec: 232.594
INFO:tensorflow:loss = 0.0193537, step = 4101 (0.430 sec)
INFO:tensorflow:global_step/sec: 282.143
INFO:tensorflow:loss = 0.0185737, step = 4201 (0.354 sec)
INFO:tensorflow:global_step/sec: 283.757
INFO:tensorflow:loss = 0.0178614, step = 4301 (0.352 sec)
INFO:tensorflow:global_step/sec: 271.17
INFO:tensorflow:loss = 0.0172092, step = 4401 (0.369 sec)
INFO:tensorflow:global_step/sec: 275.106
INFO:tensorflow:loss = 0.0166105, step = 4501 (0.363 sec)
INFO:tensorflow:global_step/sec: 272.171
INFO:tensorflow:loss = 0.0160597, step = 4601 (0.368 sec)
INFO:tensorflow:global_step/sec: 268.768
INFO:tensorflow:loss = 0.0155517, step = 4701 (0.372 sec)
INFO:tensorflow:global_step/sec: 278.645
INFO:tensorflow:loss = 0.0150824, step = 4801 (0.359 sec)
INFO:tensorflow:global_step/sec: 250.418
INFO:tensorflow:loss = 0.0146479, step = 4901 (0.400 sec)
INFO:tensorflow:Saving checkpoints for 5000 into outputdir/model.ckpt.
INFO:tensorflow:Loss for final step: 0.0100389.
readcsv=Tensor("ExpandDims:0", shape=(?, 1), dtype=string)
inputs=Tensor("concat:0", shape=(?, 8), dtype=float32)
INFO:tensorflow:Starting evaluation at 2017-10-27-17:16:28
INFO:tensorflow:Restoring parameters from outputdir/model.ckpt-5000
INFO:tensorflow:Evaluation [1/100]
INFO:tensorflow:Evaluation [2/100]
INFO:tensorflow:Evaluation [3/100]
INFO:tensorflow:Finished evaluation at 2017-10-27-17:16:29
INFO:tensorflow:Saving dict for global step 5000: global_step = 5000, loss = 0.0144165, rmse = 0.11743
serving: features=Tensor("Squeeze:0", shape=(?, 8), dtype=float32)
INFO:tensorflow:Restoring parameters from outputdir/model.ckpt-5000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: outputdir/export/Servo/1509124590/saved_model.pb
Out[9]:
({'global_step': 5000, 'loss': 0.014416501, 'rmse': 0.11742987},
 ['outputdir/export/Servo/1509124590'])

Standalone Python module

To train this on Cloud ML Engine, we take the code in this notebook and package it as a standalone Python module.
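
The module itself is not shown here. As a minimal sketch of what simplernn/trainer/task.py might look like (the flag names match the invocations below, but the actual module in the repository may differ, and the model import is an assumption):

In [ ]:
# Hypothetical sketch of simplernn/trainer/task.py -- NOT the actual module.
# It parses the flags used in this notebook's invocations and hands off to
# learn_runner; a real module would also thread train_data_paths,
# eval_data_paths, and num_epochs into the input functions.
import argparse
from tensorflow.contrib.learn.python.learn import learn_runner
import model  # hypothetical: the notebook's model code, moved into model.py

if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--train_data_paths', required=True)
  parser.add_argument('--eval_data_paths', required=True)
  parser.add_argument('--output_dir', required=True)
  parser.add_argument('--job-dir', dest='job_dir', default='./tmp')
  parser.add_argument('--num_epochs', type=int, default=100)
  args = parser.parse_args()
  learn_runner.run(model.experiment_fn, args.output_dir)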


In [1]:
%%bash
# run module as-is
REPO=$(pwd)
echo $REPO
rm -rf outputdir
export PYTHONPATH=${PYTHONPATH}:${REPO}/simplernn
python -m trainer.task \
   --train_data_paths="${REPO}/train.csv*" \
   --eval_data_paths="${REPO}/valid.csv*"  \
   --output_dir=${REPO}/outputdir \
   --job-dir=./tmp

Try out online prediction. This is how the REST API will work after you train on Cloud ML Engine.


In [ ]:
%%writefile test.json
{"rawdata": [0.0,0.0527,0.10498,0.1561,0.2056,0.253,0.2978,0.3395]}

In [41]:
%%bash
MODEL_DIR=$(ls ./outputdir/export/Servo/)
gcloud ml-engine local predict --model-dir=./outputdir/export/Servo/$MODEL_DIR --json-instances=test.json


predictions:
- predicted:
  - 0.456365
  - 0.48135
WARNING: 2017-06-27 17:52:12.098509: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-27 17:52:12.098577: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-06-27 17:52:12.098596: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
WARNING:root:MetaGraph has multiple signatures 2. Support for multiple signatures is limited. By default we select named signatures.
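
The gcloud ml-engine local predict call above emulates what the REST API will return. Once the model is deployed to Cloud ML Engine, the equivalent online-prediction request can be made from Python; a minimal sketch, where my-project and simplernn are placeholder project and model names:

In [ ]:
# Sketch of an online prediction request to a deployed Cloud ML Engine model.
# 'my-project' and 'simplernn' are placeholders -- substitute your own
# project ID and deployed model name.
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
service = discovery.build('ml', 'v1', credentials=credentials)
name = 'projects/my-project/models/simplernn'

body = {'instances': [
    {'rawdata': [0.0, 0.0527, 0.10498, 0.1561, 0.2056, 0.253, 0.2978, 0.3395]}
]}
response = service.projects().predict(name=name, body=body).execute()
print response['predictions']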

Cloud ML Engine

Now to train on Cloud ML Engine.


In [ ]:
%%bash
# run module on Cloud ML Engine
REPO=$(pwd)
BUCKET=cloud-training-demos-ml # CHANGE AS NEEDED
OUTDIR=gs://${BUCKET}/simplernn/model_trained
JOBNAME=simplernn_$(date -u +%y%m%d_%H%M%S)
REGION=us-central1
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
   --region=$REGION \
   --module-name=trainer.task \
   --package-path=${REPO}/simplernn/trainer \
   --job-dir=$OUTDIR \
   --staging-bucket=gs://$BUCKET \
   --scale-tier=BASIC \
   --runtime-version=1.2 \
   -- \
   --train_data_paths="gs://${BUCKET}/train.csv*" \
   --eval_data_paths="gs://${BUCKET}/valid.csv*"  \
   --output_dir=$OUTDIR \
   --num_epochs=100
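
Once the job is submitted, you can monitor it from the notebook; a sketch (substitute the actual job name echoed by gcloud above):

In [ ]:
%%bash
# List recent Cloud ML Engine jobs to check on the one just submitted.
gcloud ml-engine jobs list --limit=5
# To follow a specific job's logs, substitute its name:
# gcloud ml-engine jobs stream-logs simplernn_YYMMDD_HHMMSS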

Variant: long sequence

In many applications, the data arrive as one very long sequence rather than as many short sequences. The breakup() function below chops such a sequence into short, overlapping windows of a desired length.


In [10]:
import tensorflow as tf
import numpy as np

def breakup(sess, x, lookback_len):
  N = sess.run(tf.size(x))
  windows = [tf.slice(x, [b], [lookback_len]) for b in xrange(0, N-lookback_len)]
  windows = tf.stack(windows)
  return windows

x = tf.constant(np.arange(1,11, dtype=np.float32))
with tf.Session() as sess:
    print 'input=', x.eval()
    seqx = breakup(sess, x, 5)
    print 'output=', seqx.eval()


input= [  1.   2.   3.   4.   5.   6.   7.   8.   9.  10.]
output= [[ 1.  2.  3.  4.  5.]
 [ 2.  3.  4.  5.  6.]
 [ 3.  4.  5.  6.  7.]
 [ 4.  5.  6.  7.  8.]
 [ 5.  6.  7.  8.  9.]]
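
Note that the loop in breakup() stops one window short of the end: the final window [6, 7, 8, 9, 10] is never emitted, even though a sequence of length N contains N - lookback_len + 1 complete windows. If you want all of them, a sketch of the variant:

In [ ]:
# Variant of breakup() that also emits the final window; a sequence of
# length N has N - lookback_len + 1 complete windows of that length.
def breakup_all(sess, x, lookback_len):
  N = sess.run(tf.size(x))
  windows = [tf.slice(x, [b], [lookback_len])
             for b in xrange(0, N - lookback_len + 1)]
  return tf.stack(windows)

with tf.Session() as sess:
  x = tf.constant(np.arange(1, 11, dtype=np.float32))
  print breakup_all(sess, x, 5).eval()  # 6 windows, ending with [6..10]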

Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
