Dropout

Dropout is a technique for deep neural networks that helps avoid overfitting.
During training, in every iteration each layer randomly selects the nodes to keep with a preset probability called keep_prob; the other nodes are ignored (set to 0) in that forward pass and likewise ignored in the backward pass. The nodes marked with a cross in the figure below are the ones dropped in one such iteration.
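
As a minimal sketch of a single training-time forward pass (the names keep_prob and a and their values here are only illustrative, not part of the notebook cells below):

import numpy as np

keep_prob = 0.8                                # probability of keeping a node (illustrative value)
a = np.random.randn(5)                         # activations of one layer (placeholder data)
mask = np.random.rand(*a.shape) < keep_prob    # True with probability keep_prob
a = a * mask                                   # dropped nodes become 0 for this iteration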

Purpose and applications

Dropout serves two main purposes:

  1. During training, dropout effectively shrinks the network in each iteration, which acts as a form of regularization.
  2. Dropout prevents the network from placing too much weight on any single node, spreading the weights across all nodes instead.

In practice, dropout is especially effective in computer vision: the input is usually raw image pixels, which is very high-dimensional, and since layer sizes generally scale with the input size, the layers tend to be large as well.

Inverted Dropout

With the traditional dropout method, nodes are dropped at random during training and the kept nodes are updated through backpropagation. At validation and test time, however, every node's output has to be multiplied by its layer's keep probability (keep_prob); this operation is called scaling. Having to scale every time we run the model for prediction is inconvenient.
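
The reason the scaling is needed: during training only about a keep_prob share of the nodes is active, so each node's expected contribution is keep_prob times its full value; multiplying by keep_prob at test time keeps the two regimes consistent. A small illustrative sketch (a and keep_prob are placeholders, not the notebook variables below):

import numpy as np

keep_prob = 0.5
a = np.random.randn(4)                                 # one layer's activations (placeholder data)

# training: drop nodes at random, no rescaling
a_train = a * (np.random.rand(*a.shape) < keep_prob)

# validation / test: keep every node, but scale each output by keep_prob
a_test = a * keep_prob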

Hinton et al. proposed inverted dropout: during training, the kept nodes are divided by keep_prob (an "inverted scale") to compensate for the nodes that were randomly dropped. As a result, no scaling is needed at validation or test time.
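
For reference, PyTorch's built-in dropout layer already implements this inverted scheme; note that it takes the drop probability p = 1 - keep_prob and is normally applied to activations rather than to weights:

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)   # p is the probability of zeroing an element, i.e. 1 - keep_prob
x = torch.randn(2, 4)      # activations (placeholder data)

drop.train()
print(drop(x))             # kept elements are already divided by (1 - p)

drop.eval()
print(drop(x))             # identity at evaluation time, so no extra scaling is needed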


In [1]:
keep_prob = 0.5    # probability of keeping a node
do_dropout = True  # toggle: dropout should only be applied during training

A PyTorch implementation, aiming to keep exactly keep_prob * (number of nodes) in each layer:


In [2]:
import torch
import copy
w1 = torch.randn(4, 4)  # weights of one layer
w = copy.deepcopy(w1)
w


Out[2]:
tensor([[ 0.1620, -0.3637, -1.0468, -0.5357],
        [-0.9185,  1.8240,  1.3015,  0.1691],
        [ 0.8552, -0.1683,  1.2044, -0.3275],
        [ 1.7480,  0.3354, -0.8424, -0.1835]])

In [3]:
def dropout_strict(w, keep_prob):
    """Inverted dropout ensuring that the share of kept neurons is strictly keep_prob.

    Args:
        w (torch.Tensor) : weights before dropout
        keep_prob (float) : keep probability
    """
    k = round(w.shape[1] * keep_prob)                    # number of entries to keep per row
    # rank random scores and take the top k per row -> k random positions per row
    _, indices = torch.topk(torch.randn(w.shape[0], w.shape[1]), k)
    # build a 0/1 mask with ones at the chosen positions
    keep = torch.zeros_like(w).scatter_(dim=1, index=indices, src=torch.ones_like(w))
    w *= keep                                            # drop the other entries
    w /= keep_prob                                       # inverted scaling

In [4]:
if do_dropout:
    dropout_strict(w, keep_prob)
    print(w)


tensor([[ 0.0000, -0.0000, -2.0936, -1.0714],
        [-0.0000,  3.6479,  2.6031,  0.0000],
        [ 1.7104, -0.0000,  2.4087, -0.0000],
        [ 3.4961,  0.0000, -1.6848, -0.0000]])
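
As a quick check that dropout_strict really keeps a keep_prob share of each row, we can count the nonzero entries per row of the w printed above (with keep_prob = 0.5 and 4 columns, each row should keep exactly 2):

print((w != 0).sum(dim=1))   # expected: tensor([2, 2, 2, 2])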

A NumPy implementation is simpler: when the number of nodes is large, the random mask will on average keep close to a keep_prob share of them, so the realized keep ratio matches the keep probability well.


In [23]:
import numpy as np
import copy 
w1 = np.random.randn(4, 4)  # weights of one layer
w = copy.deepcopy(w1)
w


Out[23]:
array([[-0.10960301,  0.03820597, -1.5917774 , -0.65663746],
       [-0.10249081,  0.90052467,  1.39159973,  1.35458563],
       [-1.57200004, -0.04055162, -1.01190991,  0.54858   ],
       [ 0.93414654,  0.52328988,  1.34527992, -0.11895916]])

In [24]:
def dropout_loose(w, keep_prob):
    """A simple implementation of inverted dropout.

    Args:
        w (np.ndarray) : neurons subject to dropout
        keep_prob (float) : keep probability
    """
    print(w)                                         # weights before dropout
    rand = np.random.rand(w.shape[0], w.shape[1])    # one uniform random number per entry
    print(rand)
    print(keep_prob)
    keep = rand < keep_prob                          # each entry is kept with probability keep_prob
    print(keep)
    w *= keep                                        # drop the other entries
    print(w)
    w /= keep_prob                                   # inverted scaling

In [25]:
if do_dropout:
    dropout_loose(w, keep_prob)
    print(w)


[[-0.10960301  0.03820597 -1.5917774  -0.65663746]
 [-0.10249081  0.90052467  1.39159973  1.35458563]
 [-1.57200004 -0.04055162 -1.01190991  0.54858   ]
 [ 0.93414654  0.52328988  1.34527992 -0.11895916]]
[[0.13311383 0.65956675 0.46623683 0.65456941]
 [0.09542607 0.2339979  0.436658   0.67150385]
 [0.39182677 0.73311865 0.31554844 0.94106821]
 [0.86621335 0.78854791 0.67639935 0.81278023]]
0.5
[[ True False  True False]
 [ True  True  True False]
 [ True False  True False]
 [False False False False]]
[[-0.10960301  0.         -1.5917774  -0.        ]
 [-0.10249081  0.90052467  1.39159973  0.        ]
 [-1.57200004 -0.         -1.01190991  0.        ]
 [ 0.          0.          0.         -0.        ]]
[[-0.21920601  0.         -3.1835548  -0.        ]
 [-0.20498163  1.80104933  2.78319947  0.        ]
 [-3.14400009 -0.         -2.02381982  0.        ]
 [ 0.          0.          0.         -0.        ]]
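
Because dropout_loose draws the mask independently per entry, the realized keep ratio only approximates keep_prob for small layers: in the run above, 7 of the 16 entries survived (about 0.44 instead of 0.5). A quick check on the w printed above:

print((w != 0).mean())   # realized keep ratio; approaches keep_prob as the layer grows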
