I started here: Deep Learning tutorial
In [1]:
import theano
In [19]:
import theano.tensor as T
# cf. https://github.com/lisa-lab/DeepLearningTutorials/blob/c4db2098e6620a0ac393f291ec4dc524375e96fd/code/logistic_sgd.py
In [2]:
import cPickle, gzip, numpy
In [1]:
import os
In [16]:
os.getcwd()
Out[16]:
In [5]:
os.listdir( os.getcwd() )
Out[5]:
In [6]:
f = gzip.open('./Data/mnist.pkl.gz')
In [7]:
train_set, valid_set, test_set = cPickle.load(f)
In [8]:
f.close()
In [10]:
type(train_set), type(valid_set), type(test_set)
Out[10]:
In [16]:
type(train_set[0]), type(train_set[1])
Out[16]:
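So each split is just a pair of numpy arrays: a matrix of flattened 28x28 images and a vector of labels. A quick sanity check on the shapes (the sizes in the comment are what I expect for this MNIST pickle):
In [ ]:
# raw arrays straight out of the pickle -- for mnist.pkl.gz I expect
# 50000 training, 10000 validation and 10000 test rows of 784 (= 28*28) features
print train_set[0].shape, train_set[1].shape
print valid_set[0].shape, test_set[0].shape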
In [20]:
def shared_dataset(data_xy):
    """ Function that loads the dataset into shared variables

    The reason we store our dataset in shared variables is to allow
    Theano to copy it into the GPU memory (when code is run on GPU).
    Since copying data into the GPU is slow, copying a minibatch every time
    it is needed (the default behavior if the data is not in a shared
    variable) would lead to a large decrease in performance.
    """
    data_x, data_y = data_xy
    shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
    shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
    # When storing data on the GPU it has to be stored as floats,
    # therefore we store the labels as ``floatX`` as well
    # (``shared_y`` does exactly that). But during our computations
    # we need them as ints (we use the labels as indices, and as floats
    # that doesn't make sense), so instead of returning ``shared_y``
    # we have to cast it to int. This little hack lets us get around
    # the issue.
    return shared_x, T.cast(shared_y, 'int32')
In [21]:
test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)
In [22]:
batch_size = 500 # size of the minibatch
# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
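What logistic_sgd.py actually does with slices like these is substitute them into the computation graph at compile time via ``givens``, keyed on a symbolic minibatch index. Here is a minimal sketch of that pattern (the names index, x, y and get_batch are mine, not the tutorial's):
In [ ]:
# sketch: a compiled function that returns the index-th minibatch by
# substituting slices of the shared variables for x and y via ``givens``
index = T.lscalar('index')  # symbolic minibatch index
x = T.matrix('x')           # flattened images
y = T.ivector('y')          # int32 labels (hence the cast in shared_dataset)
get_batch = theano.function(
    [index], [x, y],
    givens={x: train_set_x[index * batch_size: (index + 1) * batch_size],
            y: train_set_y[index * batch_size: (index + 1) * batch_size]})
third_data, third_label = get_batch(2)  # the same minibatch as the slices above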
In [24]:
dir(train_set_x)
Out[24]:
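dir() is a blunt way to poke at it; the pieces of the shared-variable interface I actually end up using are get_value() on the data side and the dtype / eval() of the cast labels:
In [ ]:
# train_set_x is a SharedVariable, so get_value() hands back the numpy array
# (borrow=True skips the defensive copy); train_set_y is the symbolic int32
# cast, so it has no get_value() -- eval() forces it if I want to peek
print train_set_x.get_value(borrow=True).shape, train_set_x.get_value(borrow=True).dtype
print train_set_y.dtype, train_set_y[:10].eval()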
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python myscriptIwanttorunonthegpu.py
From Theano's documentation ("Using the GPU"): "Only computations with float32 data-type can be accelerated. Better support for float64 is expected in upcoming hardware, but float64 computations are still relatively slow (Jan 2010)." Hence floatX=float32.
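Whatever flags end up taking effect, Theano's config object reports what it actually picked up (this only tells me the configuration, not whether a particular op really ran on the GPU):
In [ ]:
# what Theano picked up from THEANO_FLAGS / .theanorc
print theano.config.device, theano.config.floatX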
I ran the script logistic_sgd.py locally; it's found in the DeepLearningTutorials repository on lisa-lab's GitHub.
In [8]:
os.listdir("../DeepLearningTutorials/code")
Out[8]:
In [6]:
import subprocess
In [10]:
subprocess.call(['python','../DeepLearningTutorials/code/logistic_sgd.py'])
Out[10]:
In [9]:
# THEANO_FLAGS has to be passed through the environment, not in the argument list
subprocess.call(['python', '../DeepLearningTutorials/code/logistic_sgd.py'],
                env=dict(os.environ, THEANO_FLAGS='device=gpu,floatX=float32'))
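Equivalently, and closer to what I'd type in a terminal, the whole command line can be handed to the shell:
In [ ]:
# same call, but the single string is parsed by the shell, so the
# VAR=value prefix works the way it does on the command line
subprocess.call('THEANO_FLAGS=device=gpu,floatX=float32 python '
                '../DeepLearningTutorials/code/logistic_sgd.py', shell=True)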
In [15]:
execfile('../DeepLearningTutorials/code/logistic_sgd_b.py')
In [18]:
os.listdir( '../' )
Out[18]:
In [19]:
import sklearn
In [ ]: