Deep Learning

TensorFlow


In [1]:
df <- read.csv('pesos-alturas.csv')
head(df)


  Pesos Alturas
1    74    1.73
2    61    1.62
3    61    1.63
4    68    1.68
5    70    1.68
6    73    1.75

In [2]:
library(tensorflow)

Splitting the data into training and test sets:


In [7]:
set.seed(42) 
indices <- sample.int(n = nrow(df), size = floor(.75*nrow(df)), replace = FALSE)
df_treino <- df[indices,]
df_teste  <- df[-indices,]

In [12]:
nrow(df_treino)


224

In [13]:
nrow(df_teste)


75

Defining the coefficients (what we want to estimate): y = ax + b


In [17]:
a <- tf$Variable(rnorm(1), name="ca")  # slope ("coeficiente angular"), randomly initialized
b <- tf$Variable(rnorm(1), name="cl")  # intercept ("coeficiente linear"), randomly initialized

Defining the placeholders (x and y):


In [20]:
x <- tf$placeholder("float")
y <- tf$placeholder("float")

The model function we want to fit:


In [21]:
y_hat <- tf$add(tf$multiply(x, a), b)

The loss function we want to minimize:


In [23]:
perda <- tf$reduce_sum(tf$pow(y_hat - y, 2)/2)
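The expression above implements half the sum of squared errors over the training points:

```latex
L(a, b) = \frac{1}{2} \sum_{i} \left( \hat{y}_i - y_i \right)^2,
\qquad \hat{y}_i = a x_i + b
```

The factor of 1/2 only rescales the loss; it does not change the location of the minimum.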

Learning rate:


In [24]:
l_rate <- 0.001

In [26]:
generator <- tf$train$GradientDescentOptimizer(learning_rate = l_rate)
optimizer <- generator$minimize(perda)
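Each run of `optimizer` updates the coefficients one step in the direction of steepest descent of `perda`, with step size `l_rate`:

```latex
a \leftarrow a - \eta \, \frac{\partial L}{\partial a},
\qquad
b \leftarrow b - \eta \, \frac{\partial L}{\partial b},
\qquad \eta = 0.001
```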

In [27]:
init <- tf$global_variables_initializer()

In [28]:
sess <- tf$Session()
sess$run(init)

Run until the loss value plateaus:


In [32]:
feed_dict <- dict(x = df_treino$Alturas, y = df_treino$Pesos)
epsilon <- .Machine$double.eps
last_cost <- Inf
while (TRUE) {
  sess$run(optimizer, feed_dict = feed_dict)               # one gradient-descent step
  current_cost <- sess$run(perda, feed_dict = feed_dict)   # loss after the step
  if (last_cost - current_cost < epsilon) break            # stop when the loss stops improving
  last_cost <- current_cost
}
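Once training converges, the same graph can score the held-out data. A minimal sketch reusing the variables defined above; since `df_teste` was never fed to the optimizer, its loss is an honest estimate of generalization:

```r
# Loss on the held-out test set.
feed_teste <- dict(x = df_teste$Alturas, y = df_teste$Pesos)
sess$run(perda, feed_dict = feed_teste)

# Predicted weights for the test heights.
head(sess$run(y_hat, feed_dict = dict(x = df_teste$Alturas)))
```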

Display the computed coefficients:


In [38]:
tf_coef <- c(sess$run(b), sess$run(a))
tf_coef


[1] -95.8333740234375  97.4041595458984
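As a cross-check, ordinary least squares via base R's `lm()` should produce nearly identical coefficients (intercept first, then slope, matching the order of `tf_coef`):

```r
# Fit the same linear model in closed form and compare with gradient descent.
modelo_lm <- lm(Pesos ~ Alturas, data = df_treino)
lm_coef <- unname(coef(modelo_lm))   # c(intercept, slope)
rbind(tensorflow = tf_coef, lm = lm_coef)
```

Any remaining difference reflects the stopping tolerance of the training loop, not a modeling difference.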
