# Pick the wheel for your platform, e.g.
# Mac OS X:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/tensorflow-0.9.0-py3-none-any.whl
# Linux, GPU-enabled:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow-0.9.0-cp34-cp34m-linux_x86_64.whl
# Linux, CPU only:
export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.9.0-cp34-cp34m-linux_x86_64.whl
Finally, install it:
pip install --upgrade $TF_BINARY_URL
In [1]:
import numpy as np
import warnings
from scipy.optimize import minimize
from scipy.integrate import quad
from scipy.interpolate import interp1d
from scipy import stats
from importlib import reload
from src import sim_cts, sim_discrete
from scipy.stats import poisson, geom
import tensorflow as tf
In [2]:
## web graphics
%matplotlib inline
## interactive graphics
#%matplotlib notebook
## SVG graphics
#%config InlineBackend.figure_format = 'svg'
In [3]:
# plotting requirements
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
In [4]:
from src.rng import default_random_state
First we create a Graph, a computational-graph abstraction whose operations are only executed later.
We then create a Session object, which encapsulates the connection between our graph and the weird morass of C++/GPU task distribution, graph compilation and so on.
A clear example of doing this is in the MNIST tutorial.
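For reference, here is a minimal sketch (my own, not from the tutorial) of creating a Graph explicitly and binding a Session to it; the cells below just rely on the implicit default graph.
g = tf.Graph()
with g.as_default():
    c = tf.constant([[1.0, 2.0]])  # this op is added to g, not to the default graph
sess = tf.Session(graph=g)         # the session is bound to g
print(sess.run(c))
sess.close()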
In [5]:
# This makes a Graph implicitly (or at least, puts operations onto the default graph).
# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])
# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])
# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
In [6]:
# Launch a session, which will by default execute the default graph when we request operations from it.
sess = tf.Session()
# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
result = sess.run(product)
print(result)
# ==> [[ 12.]]
# Close the Session when we're done.
sess.close()
In [7]:
with tf.Session() as sess:
    result = sess.run(product)
    print(result)
In [8]:
# Enter an interactive TensorFlow Session.
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()
# Add an op to subtract 'a' from 'x'. Run it and print the result
sub = tf.sub(x, a)
print(sub.eval())
# ==> [-2. -1.]
In [9]:
# Close the Session when we're done.
sess.close()
placeholder: input data that we expect to be filled in later, e.g. minibatches of observations
Variable: parameters that we expect to update inside the training loop
Scope: a scope to manage shared variables, much like a class instance's member variables (which would be an alternative, and also simple, way to manage this)
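Here is a minimal sketch of all three, using the same 0.x-era API as the rest of this notebook; the names (x_in, toy_model, scale) are purely illustrative.
x_in = tf.placeholder(tf.float32, shape=[None], name="x_in")  # filled in per minibatch
with tf.variable_scope("toy_model"):
    scale = tf.get_variable("scale", shape=[],
                            initializer=tf.constant_initializer(1.0))
scaled = scale * x_in

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # the placeholder is supplied at run time via feed_dict
    print(sess.run(scaled, feed_dict={x_in: [1.0, 2.0, 3.0]}))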
In [10]:
x_data = np.random.rand(10000).astype(np.float32)
y_data = x_data * 5 + 0.3 + np.random.normal(scale=2, size=x_data.size)
In [11]:
plt.scatter(x_data, y_data, marker='.');
In [12]:
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
y = W * x_data + b
In [13]:
# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
In [15]:
sess = tf.Session()
sess.run(tf.initialize_all_variables())
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(W), sess.run(b))
In [16]:
sess.close()
So that worked beautifully with TF's own built-in optimizer. But how do we use an external optimiser?
One answer is tf.contrib.opt.ScipyOptimizerInterface.
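Roughly, it wraps a loss tensor and hands it (plus gradients) to scipy.optimize; here is a sketch, assuming the contrib API of this era (exact constructor arguments vary across versions):
opt = tf.contrib.opt.ScipyOptimizerInterface(loss, method='L-BFGS-B')
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    opt.minimize(sess)               # runs scipy's optimizer against the graph
    print(sess.run(W), sess.run(b))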
More generally, internal loops can be constructed using control flow operations.
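For example, a counting loop via tf.while_loop (a sketch; availability and signature of this op have shifted a little between versions):
i = tf.constant(0)
cond = lambda i: tf.less(i, 10)
body = lambda i: tf.add(i, 1)
final_i = tf.while_loop(cond, body, [i])
with tf.Session() as sess:
    print(sess.run(final_i))  # ==> 10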
Now I want to know how to mask variables for use in my estimators.
In [17]:
a = tf.Variable(
    [[-3.3, -2, -1, 0, 1, 2, 3],
     [-3, -2, -1, 0, 1, 2, 3],
     [-3, -2, -1, 0, 1, 2, 3]],
    name="a")
# 0/1 mask selecting the strictly positive entries of a
b = tf.maximum(tf.sign(a), 0)
In [18]:
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
b.eval()
Out[18]:
Surprisingly many things are Ops, which must not only be created but also run in a separate step, e.g.
var.assign(val) # Doesn't do much
sess.run(var.assign(val)) # updates value of var
Getting values doesn't require creating intermediate Ops:
var.eval(session=sess)
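Put together as a runnable snippet (the variable name var is just illustrative):
var = tf.Variable(0.0)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    assign_op = var.assign(42.0)     # only creates the assign Op
    print(var.eval(session=sess))    # ==> 0.0, nothing has run yet
    sess.run(assign_op)              # now the assignment actually executes
    print(var.eval(session=sess))    # ==> 42.0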
Keeping a default-graph monster around in global state is annoying for moderately complex models, so I don't recommend it. If you rebuild the graph, you can find you have accidentally concatenated the old graph with your new one, and confusing errors result.
Generally, if you don't want colliding ops I'd recommend passing graphs around deliberately. But beware: Ops then don't go into the correct graph by default; you have to do
with g.as_default():
    #blah
Additionally, you might need a default session:
with sess.as_default():
    #blah
sess.close()  # still necessary
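A sketch of the whole pattern, keeping two graphs cleanly separated and installing a default session explicitly (the graph and session names are illustrative):
g1, g2 = tf.Graph(), tf.Graph()
with g1.as_default():
    c1 = tf.constant(1.0)  # lives in g1
with g2.as_default():
    c2 = tf.constant(2.0)  # lives in g2
sess1 = tf.Session(graph=g1)
with sess1.as_default():
    print(c1.eval())       # eval() finds the default session we just installed
sess1.close()              # still necessary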
In [ ]: