Simple conv net

Date: 2019-09-11 08:49:43

Tags: pytorch

I am new to pytorch and I hope you can shed some light on a small net I am trying to set up.


import torch
import torch.nn as nn


class PzConv2d(nn.Module):
    """ Convolution 2D Layer followed by ReLU activation
    """
    def __init__(self, n_in_channels, n_out_channels, **kwargs):
        super(PzConv2d, self).__init__()
        self.conv = nn.Conv2d(n_in_channels, n_out_channels, bias=True,
                              **kwargs)
        self.activ = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        return self.activ(x)


class PzPool2d(nn.Module):
    """ Average Pooling Layer
    """
    def __init__(self, kernel_size, stride, padding=0):
        super(PzPool2d, self).__init__()
        self.pool = nn.AvgPool2d(kernel_size=kernel_size,
                                 stride=stride,
                                 padding=padding,
                                 ceil_mode=True,
                                 count_include_pad=False)

    def forward(self, x):
        return self.pool(x)


class PzFullyConnected(nn.Module):
    """ Dense or Fully Connected Layer followed by ReLU
    """
    def __init__(self, n_inputs, n_outputs, withrelu=True, **kwargs):
        super(PzFullyConnected, self).__init__()
        self.withrelu = withrelu
        self.linear = nn.Linear(n_inputs, n_outputs, bias=True)
        self.activ = nn.ReLU()

    def forward(self, x):
        x = self.linear(x)
        if self.withrelu:
            x = self.activ(x)
        return x


class NetCNN(nn.Module):
    def __init__(self, n_input_channels, debug=False):
        super(NetCNN, self).__init__()

        self.n_bins = 180
        self.debug = debug
        self.conv0 = PzConv2d(n_in_channels=n_input_channels,
                              n_out_channels=64,
                              kernel_size=5, padding=2)
        self.pool0 = PzPool2d(kernel_size=2, stride=2, padding=0)

        self.conv1 = PzConv2d(n_in_channels=64,
                              n_out_channels=92,
                              kernel_size=3, padding=2)
        self.pool1 = PzPool2d(kernel_size=2, stride=2, padding=0)

        self.conv2 = PzConv2d(n_in_channels=92,
                              n_out_channels=128,
                              kernel_size=3, padding=2)
        self.pool2 = PzPool2d(kernel_size=2, stride=2, padding=0)

        self.fc0 = PzFullyConnected(n_inputs=12800, n_outputs=1024)
        self.fc1 = PzFullyConnected(n_inputs=1024, n_outputs=self.n_bins)

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

    def forward(self, x, dummy):
        # x: image tensor of shape (N_batch, Channels, Height, Width)
        #    here Channels = 5 filters and H = W = 64 pixels
        # dummy: not used

        # stage 0 conv 64 x 5x5
        x = self.conv0(x)
        x = self.pool0(x)

        # stage 1 conv 92 x 3x3
        x = self.conv1(x)
        x = self.pool1(x)

        # stage 2 conv 128 x 3x3
        x = self.conv2(x)
        x = self.pool2(x)

        x = self.fc0(x.view(-1, self.num_flat_features(x)))
        x = self.fc1(x)

        output = x

        return output

I have checked that the dimensions of the intermediate "x" tensors during the forward pass are correct (at least when I feed in a random input image tensor). But if you see anything strange, please let me know.
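
For reference, here is a minimal version of that shape check (assuming, as the comments in forward() say, 5-channel 64x64 inputs):

import torch

net = NetCNN(n_input_channels=5)
x = torch.randn(4, 5, 64, 64)   # random batch of 4 "images"
out = net(x, None)              # second argument is the unused dummy
print(out.shape)                # torch.Size([4, 180])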

Now, I have seen code where the forward method uses a sequence of "functionals" instead of declaring separate layers the way I did. Is there a difference?

(Note that I am using F.cross_entropy as the loss function, so I am not ending the network with a SoftMax.)
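
For context, F.cross_entropy applies log_softmax internally and expects raw logits, so that is consistent. A minimal sketch, reusing net and x from the check above, with made-up integer class labels:

import torch
import torch.nn.functional as F

logits = net(x, None)                    # raw scores, shape (4, 180)
targets = torch.randint(0, 180, (4,))    # hypothetical class labels
loss = F.cross_entropy(logits, targets)  # log_softmax + NLL in one call
loss.backward()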

Thanks.

1 answer:

Answer 0 (score: 1)

As you can see in the pytorch docs, there are "layers" and "functionals". As you already noticed, they are very similar, but there is a difference:
A layer is usually more than just a "functional": it also wraps the trainable parameters.
Therefore you can call the F.conv2d(...) function in your forward(), but then you have to manually provide (store/update) the weights/kernels for that convolution yourself. If, on the other hand, you use nn.Conv2d, pytorch manages/stores/updates the weights/kernels for you.
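
For illustration, a rough sketch of the two styles (the initialization values below are arbitrary, just to keep the snippet self-contained):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ManualConv(nn.Module):
    """Functional style: the kernel must be created and registered by hand."""
    def __init__(self):
        super(ManualConv, self).__init__()
        # shape (out_channels, in_channels, kH, kW); values are arbitrary here
        self.weight = nn.Parameter(torch.randn(64, 5, 5, 5) * 0.01)
        self.bias = nn.Parameter(torch.zeros(64))

    def forward(self, x):
        return F.conv2d(x, self.weight, self.bias, padding=2)

# module style: nn.Conv2d creates and registers the same parameters for you
conv = nn.Conv2d(5, 64, kernel_size=5, padding=2, bias=True)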
Some layers have no internal parameters/buffers at all (e.g. nn.ReLU, nn.Softmax, etc.), so for those you can choose whether to keep a "layer" for the operation or to simply call the corresponding functional in your forward(). It is a matter of convenience and habit; it is up to you.
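
For instance, the PzConv2d from the question could equivalently drop its nn.ReLU member and use the functional form, something like:

import torch.nn as nn
import torch.nn.functional as F

class PzConv2dF(nn.Module):
    """ Same as PzConv2d, but using the functional form of ReLU """
    def __init__(self, n_in_channels, n_out_channels, **kwargs):
        super(PzConv2dF, self).__init__()
        self.conv = nn.Conv2d(n_in_channels, n_out_channels, bias=True,
                              **kwargs)

    def forward(self, x):
        # ReLU has no parameters, so the functional is equivalent here
        return F.relu(self.conv(x))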