This notebook illustrates several models for predicting the next value of a time series.
In [ ]:
# You must update BUCKET, PROJECT, and REGION to proceed with the lab
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
SEQ_LEN = 50
In [ ]:
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['SEQ_LEN'] = str(SEQ_LEN)
os.environ['TFVERSION'] = '1.15'
The simulated training data is essentially a set of sinusoids with random amplitudes and frequencies.
In [ ]:
import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf
print(tf.__version__)
In [ ]:
import numpy as np
import seaborn as sns

def create_time_series():
    freq = (np.random.random() * 0.5) + 0.1  # 0.1 to 0.6
    ampl = np.random.random() + 0.5          # 0.5 to 1.5
    noise = [np.random.random() * 0.3 for i in range(SEQ_LEN)]  # 0.0 to 0.3, uniformly distributed
    x = np.sin(np.arange(0, SEQ_LEN) * freq) * ampl + noise
    return x

flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71"]
for i in range(0, 5):  # plot 5 example series
    # sns.tsplot was removed from recent seaborn releases; lineplot replaces it
    sns.lineplot(x=np.arange(SEQ_LEN), y=create_time_series(), color=flatui[i % len(flatui)])
In [ ]:
def to_csv(filename, N):
    with open(filename, 'w') as ofp:
        for lineno in range(0, N):
            seq = create_time_series()
            line = ",".join(map(str, seq))
            ofp.write(line + '\n')

import os
try:
    os.makedirs('data/sines/')
except OSError:
    pass

np.random.seed(1)  # makes data generation reproducible
to_csv('data/sines/train-1.csv', 1000)  # 1000 sequences
to_csv('data/sines/valid-1.csv', 250)
In [ ]:
!head -5 data/sines/*-1.csv
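Each row of these CSV files holds one complete sequence of SEQ_LEN comma-separated values. As a minimal sketch of how such a row becomes a training example, assuming the first SEQ_LEN-1 values serve as input features and the final value as the label to predict (the natural setup for a next-value model):
In [ ]:
# Illustration only: split one CSV row into (features, label).
# Assumption: the first SEQ_LEN-1 values are the inputs and the
# last value is the label, i.e. next-value prediction.
with open('data/sines/train-1.csv') as f:
    values = [float(v) for v in f.readline().strip().split(',')]
features, label = values[:-1], values[-1]
print(len(features), 'input values; label =', label)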
Train the model locally first to make sure the code works as intended.
Remember to update the "--model=" flag on the last line of the command if you want to train a different model.
You may ignore any TensorFlow deprecation warnings.
Note: this step is complete when you see a message similar to the following: "INFO:tensorflow:Loss for final step: N.NNN...N"
In [ ]:
%%bash
DATADIR=$(pwd)/data/sines
OUTDIR=$(pwd)/trained/sines
rm -rf $OUTDIR
gcloud ai-platform local train \
  --module-name=sinemodel.task \
  --package-path=${PWD}/sinemodel \
  -- \
  --train_data_path="${DATADIR}/train-1.csv" \
  --eval_data_path="${DATADIR}/valid-1.csv" \
  --output_dir=${OUTDIR} \
  --model=linear --train_steps=10 --sequence_length=$SEQ_LEN
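The sinemodel package is provided with the lab, so you do not need to write the models yourself. Purely for intuition, here is a rough, hypothetical sketch of what a linear next-value predictor looks like in TF 1.x; this is not the lab's actual model code:
In [ ]:
# Hypothetical sketch of a 'linear' next-value model in TF 1.x.
# This is NOT the sinemodel code, just an illustration of the idea.
import numpy as np
import tensorflow as tf

N_INPUTS = SEQ_LEN - 1  # predict the last value from the first SEQ_LEN-1

x = tf.placeholder(tf.float32, [None, N_INPUTS])
y = tf.placeholder(tf.float32, [None, 1])
pred = tf.layers.dense(x, 1, activation=None)  # one dense layer = linear regression
loss = tf.losses.mean_squared_error(labels=y, predictions=pred)
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    seq = create_time_series()  # one simulated sequence from earlier
    feed = {x: seq[np.newaxis, :N_INPUTS], y: seq[np.newaxis, -1:]}
    for _ in range(10):
        _, final_loss = sess.run([train_op, loss], feed_dict=feed)
    print('loss after 10 steps:', final_loss)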
Next, train on Cloud AI Platform with more data.
In [ ]:
import shutil
shutil.rmtree('data/sines', ignore_errors=True)
os.makedirs('data/sines/')

np.random.seed(1)  # makes data generation reproducible
for i in range(0, 10):
    to_csv('data/sines/train-{}.csv'.format(i), 1000)  # 1000 sequences
    to_csv('data/sines/valid-{}.csv'.format(i), 250)
In [ ]:
%%bash
gsutil -m rm -rf gs://${BUCKET}/sines/*
gsutil -m cp data/sines/*.csv gs://${BUCKET}/sines
In [ ]:
%%bash
for MODEL in linear dnn cnn rnn rnn2 rnnN; do
  OUTDIR=gs://${BUCKET}/sinewaves/${MODEL}
  JOBNAME=sines_${MODEL}_$(date -u +%y%m%d_%H%M%S)
  gsutil -m rm -rf $OUTDIR
  gcloud ai-platform jobs submit training $JOBNAME \
    --region=$REGION \
    --module-name=sinemodel.task \
    --package-path=${PWD}/sinemodel \
    --job-dir=$OUTDIR \
    --scale-tier=BASIC \
    --runtime-version=$TFVERSION \
    -- \
    --train_data_path="gs://${BUCKET}/sines/train*.csv" \
    --eval_data_path="gs://${BUCKET}/sines/valid*.csv" \
    --output_dir=$OUTDIR \
    --train_steps=3000 --sequence_length=$SEQ_LEN --model=$MODEL
done
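The six jobs run in parallel and each takes roughly 10 to 15 minutes. To check on them without leaving the notebook, you can shell out to gcloud (this simply wraps the standard gcloud ai-platform jobs list command):
In [ ]:
import subprocess
# List the most recently submitted AI Platform jobs and their states.
print(subprocess.check_output(
    ['gcloud', 'ai-platform', 'jobs', 'list', '--limit', '6']).decode())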
To activate TensorBoard within the JupyterLab UI, navigate to "File" > "New Launcher", then double-click the 'Tensorboard' icon on the bottom row.
A tab named 'TensorBoard 1' will open. Navigate through its tabs to inspect the active TensorBoard; the 'Graphs' and 'Projector' tabs offer particularly interesting information, including the ability to replay the runs.
You may close the TensorBoard tab when you are finished exploring.
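If you prefer to stay in the notebook, you can also launch TensorBoard programmatically. A minimal sketch, assuming the tensorboard package that ships with TF 1.15 and pointing at the output directory used by the jobs above:
In [ ]:
# Sketch: start TensorBoard from Python instead of the JupyterLab launcher.
# Assumes the tensorboard package bundled with TF 1.15.
from tensorboard import program

tb = program.TensorBoard()
tb.configure(argv=[None, '--logdir', 'gs://{}/sinewaves'.format(BUCKET)])
url = tb.launch()  # starts TensorBoard in a background thread
print('TensorBoard is running at', url)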
When I ran it, these were the RMSEs that I got for different models. Your results will vary:
Model | Sequence length | # of steps | Training time (min) | RMSE |
---|---|---|---|---|
linear | 50 | 3000 | 10 | 0.150 |
dnn | 50 | 3000 | 10 | 0.101 |
cnn | 50 | 3000 | 10 | 0.105 |
rnn | 50 | 3000 | 11 | 0.100 |
rnn2 | 50 | 3000 | 14 | 0.105 |
rnnN | 50 | 3000 | 15 | 0.097 |
You can see a significant improvement when switching from the linear model to the non-linear models, but within the non-linear models (DNN/CNN/RNN) performance is quite similar.
Perhaps this problem is too simple to require advanced deep learning models. In the next lab we'll deal with a problem where an RNN is more appropriate.
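For reference, the RMSE reported above is just the square root of the mean squared error between the predicted and actual next values. A quick illustration with made-up numbers:
In [ ]:
import numpy as np

# Hypothetical predicted vs. actual next values for a few sequences.
predictions = np.array([0.91, -0.42, 1.10, 0.03])
actuals     = np.array([1.00, -0.50, 1.00, 0.00])

rmse = np.sqrt(np.mean((predictions - actuals) ** 2))
print('RMSE: {:.3f}'.format(rmse))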
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License