We build and run a basic linear regression model that learns the relationship between x and y.
Using x = [1, 2, 3] and y = [1, 2, 3], which follow the relationship y = x, the code below learns a slope w of 1 and an intercept b of 0.
In [1]:
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([[1], [2], [3]]))
y = Variable(torch.Tensor([[1], [2], [3]]))
w = Variable(torch.randn(1, 1), requires_grad = True)
b = Variable(torch.randn(1), requires_grad = True)
learning_rate = 1e-2
# training
for i in range(1000):
    # network model (forward pass)
    y_pred = torch.mm(x, w)
    y_pred = y_pred + b.unsqueeze(1).expand_as(y_pred)
    loss = (y_pred - y).pow(2).sum()
    # initialize network model parameter grad (grad is None before the first backward)
    if w.grad is not None:
        w.grad.data.zero_()
        b.grad.data.zero_()
    # update w, b
    loss.backward()
    w.data -= learning_rate * w.grad.data
    b.data -= learning_rate * b.grad.data
# training result
print('w = ', w.data[0][0], ', b = ', b.data[0])
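As a side note, Variable was merged into Tensor in PyTorch 0.4, so on a recent PyTorch the same manual loop can be written without it. A minimal sketch, assuming PyTorch 0.4 or later:
In [ ]:
import torch

x = torch.tensor([[1.], [2.], [3.]])
y = torch.tensor([[1.], [2.], [3.]])
w = torch.randn(1, 1, requires_grad=True)
b = torch.randn(1, requires_grad=True)
learning_rate = 1e-2
# training
for i in range(1000):
    # network model (broadcasting adds the bias to every row)
    y_pred = x.mm(w) + b
    loss = (y_pred - y).pow(2).sum()
    # backward, then update the parameters without tracking the update itself
    loss.backward()
    with torch.no_grad():
        w -= learning_rate * w.grad
        b -= learning_rate * b.grad
        w.grad.zero_()
        b.grad.zero_()
# training result
print('w = ', w.item(), ', b = ', b.item())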
In [2]:
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([[1], [2], [3]]))
y = Variable(torch.Tensor([[1], [2], [3]]))
w = Variable(torch.randn(1, 1), requires_grad = True)
b = Variable(torch.randn(1), requires_grad = True)
optimizer = torch.optim.Adam([w, b], lr=1e-2)
# training
for i in range(1000):
    # network model (forward pass)
    y_pred = torch.mm(x, w)
    y_pred = y_pred + b.unsqueeze(1).expand_as(y_pred)
    loss = (y_pred - y).pow(2).sum()
    # initialize network model parameter grad
    optimizer.zero_grad()
    # optimizer step
    loss.backward()
    optimizer.step()
# training result
print('w = ', w.data[0][0], ', b = ', b.data[0])
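Adam is just one choice here. If you want the optimizer to reproduce the hand-written update rule from In [1] exactly, plain SGD with the same learning rate is the drop-in replacement:

# plain SGD with lr=1e-2 performs w -= lr * w.grad (and likewise for b),
# i.e. exactly the manual update written out in In [1]
optimizer = torch.optim.SGD([w, b], lr=1e-2)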
In [3]:
import torch
from torch.autograd import Variable
x = Variable(torch.Tensor([[1], [2], [3]]))
y = Variable(torch.Tensor([[1], [2], [3]]))
# define model, loss function, optimizer
model = torch.nn.Sequential(torch.nn.Linear(1, 1, bias=True))
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
# training
for i in range(1000):
    # network model (forward pass)
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    # zero_grad, backward, step (update parameters) in series
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# training result
print(list(model.parameters()))
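The printed parameters are the weight and bias of the nn.Linear layer. If you want the two scalars directly, you can index into the Sequential container (index 0 is the Linear module), for example:

linear = model[0]  # the nn.Linear(1, 1) layer inside the Sequential container
print('w = ', linear.weight.data[0][0], ', b = ', linear.bias.data[0])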
The difference in coding style is clear. PyTorch performs, inside the training loop, (1) the forward pass through the network model and the loss computation, (2) the initialization of the parameter gradients, and (3) backward and the parameter update.
In contrast, TensorFlow builds steps 1, 2, and 3 above as a static graph first, then creates a session and produces the values through sess.run.
However, PyTorch also declares the model, loss, and optimizer before the training loop, so the code ends up looking almost the same as the TensorFlow code.
In [ ]:
import tensorflow as tf
x_data = [1, 2, 3]
y_data = [1, 2, 3]
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
x = tf.placeholder(tf.float32, name="X")
y = tf.placeholder(tf.float32, name="Y")
# network model, loss function
y_pred = tf.add(tf.multiply(w, x), b)
loss = tf.reduce_mean(tf.square(y_pred - y))
# optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-2)
train_op = optimizer.minimize(loss)
with tf.Session() as sess:
    # initialize network model parameters
    sess.run(tf.global_variables_initializer())
    # training
    for step in range(1000):
        # optimizer step
        sess.run(train_op, feed_dict={x: x_data, y: y_data})
    # training result
    print('w = ', sess.run(w), ', b = ', sess.run(b))
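Note that this is TensorFlow 1.x style code (placeholders and sessions). In TensorFlow 2.x, eager execution is the default and the same regression is typically written with tf.GradientTape, which ends up looking very close to the PyTorch loops above. A rough sketch, assuming TensorFlow 2.x:
In [ ]:
import tensorflow as tf

x_data = tf.constant([1., 2., 3.])
y_data = tf.constant([1., 2., 3.])
w = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
b = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
optimizer = tf.optimizers.SGD(learning_rate=1e-2)
# training
for step in range(1000):
    # record the forward pass so gradients can be computed
    with tf.GradientTape() as tape:
        y_pred = w * x_data + b
        loss = tf.reduce_mean(tf.square(y_pred - y_data))
    # compute gradients and update w, b
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))
# training result
print('w = ', w.numpy(), ', b = ', b.numpy())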