How do I do fully connected batch normalization in PyTorch?

Posted: 2017-11-09 09:13:51

Tags: python neural-network deep-learning pytorch batch-normalization

torch.nn has the classes BatchNorm1d, BatchNorm2d, and BatchNorm3d, but it doesn't have a fully connected BatchNorm class? What is the standard way of doing plain batch norm in PyTorch?

2 Answers:

Answer 0 (score: 13)

OK. I figured it out. BatchNorm1d can also handle rank-2 tensors, so BatchNorm1d can be used for the normal fully connected case.

For example:

import torch.nn as nn
import torch.nn.functional as F


class Policy(nn.Module):
    def __init__(self, num_inputs, action_space, hidden_size1=256, hidden_size2=128):
        super(Policy, self).__init__()
        self.action_space = action_space
        num_outputs = action_space

        self.linear1 = nn.Linear(num_inputs, hidden_size1)
        self.linear2 = nn.Linear(hidden_size1, hidden_size2)
        self.linear3 = nn.Linear(hidden_size2, num_outputs)
        # BatchNorm1d normalizes over the feature dimension of (batch, features) activations
        self.bn1 = nn.BatchNorm1d(hidden_size1)
        self.bn2 = nn.BatchNorm1d(hidden_size2)

    def forward(self, inputs):
        x = inputs
        x = self.bn1(F.relu(self.linear1(x)))
        x = self.bn2(F.relu(self.linear2(x)))
        out = self.linear3(x)
        return out
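As a quick sanity check (a minimal sketch not from the original answer, with arbitrary toy sizes), BatchNorm1d accepts a rank-2 (batch, features) tensor directly, no reshaping needed:

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(128)     # 128 features
x = torch.randn(32, 128)     # rank-2 input: (batch, features)
y = bn(x)                    # works as-is
print(y.shape)               # torch.Size([32, 128])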

Answer 1 (score: 5)

BatchNorm1d is usually placed before the ReLU, and the bias in the preceding Linear layer is redundant (BatchNorm's own learnable shift takes over that role), so:

import torch.nn as nn
import torch.nn.functional as F


class Policy(nn.Module):
    def __init__(self, num_inputs, action_space, hidden_size1=256, hidden_size2=128):
        super(Policy, self).__init__()
        self.action_space = action_space
        num_outputs = action_space

        # bias=False: BatchNorm subtracts the mean and adds its own beta,
        # so a bias in the Linear layer right before it has no effect
        self.linear1 = nn.Linear(num_inputs, hidden_size1, bias=False)
        self.linear2 = nn.Linear(hidden_size1, hidden_size2, bias=False)
        self.linear3 = nn.Linear(hidden_size2, num_outputs)
        self.bn1 = nn.BatchNorm1d(hidden_size1)
        self.bn2 = nn.BatchNorm1d(hidden_size2)

    def forward(self, inputs):
        x = inputs
        x = F.relu(self.bn1(self.linear1(x)))
        x = F.relu(self.bn2(self.linear2(x)))
        out = self.linear3(x)
        return out
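To see why the bias is redundant, here is a small hedged demonstration (not part of the original answer; sizes and names are made up). In training mode, a per-feature bias added before BatchNorm shifts the batch mean by exactly that constant, so it cancels out after normalization and the two outputs match:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(32, 64)

lin_bias = nn.Linear(64, 128)                       # with bias
lin_nobias = nn.Linear(64, 128, bias=False)         # without bias
with torch.no_grad():
    lin_nobias.weight.copy_(lin_bias.weight)        # identical weights

bn = nn.BatchNorm1d(128)  # training mode: normalizes with batch statistics

# The bias shifts the batch mean by the same constant, so it cancels out.
print(torch.allclose(bn(lin_bias(x)), bn(lin_nobias(x)), atol=1e-5))  # True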