How can I implement my own ResNet with torch.nn.Sequential in PyTorch?

Asked: 2019-07-27 04:08:33

Tags: machine-learning neural-network deep-learning conv-neural-network pytorch

I want to implement a ResNet network (or rather, residual blocks), but I really want it to be in sequential network form.

What I mean by sequential network form is the following:

from collections import OrderedDict
import torch.nn as nn

## mdl5, from cifar10 tutorial
mdl5 = nn.Sequential(OrderedDict([
    ('pool1', nn.MaxPool2d(2, 2)),
    ('relu1', nn.ReLU()),
    ('conv1', nn.Conv2d(3, 6, 5)),
    ('pool2', nn.MaxPool2d(2, 2)),  # keys must be unique, so not 'pool1' again
    ('relu2', nn.ReLU()),
    ('conv2', nn.Conv2d(6, 16, 5)),
    ('relu3', nn.ReLU()),  # likewise, not 'relu2' again
    ('flatten', nn.Flatten()),
    ('fc1', nn.Linear(1024, 120)), # figure out equation properly
    ('relu4', nn.ReLU()),
    ('fc2', nn.Linear(120, 84)),
    ('relu5', nn.ReLU()),
    ('fc3', nn.Linear(84, 10))
]))

But with the NN lego blocks being "ResNet" blocks, of course.

I know the equation is something like:

    y = F(x) + x

(the output of a residual block is the block's transformation F(x) plus the identity shortcut x)

But I am not sure how to do it in PyTorch AND with Sequential. Sequential is key for me!


Cross-posted:

1 answer:

Answer 0 (score: 0)

You cannot do it using torch.nn.Sequential alone because, as the name suggests, it requires the operations to run sequentially, while a residual connection runs in parallel with the main path.

In principle, you can easily construct your own block like this:

import torch

class ResNet(torch.nn.Module):
    def __init__(self, module):
        super().__init__()  # required so that self.module is registered properly
        self.module = module

    def forward(self, inputs):
        return self.module(inputs) + inputs
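As a quick sanity check of the wrapper on its own (my own toy example, not part of the original answer), wrapping `torch.nn.Identity` should simply double the input, since the skip connection adds the input back to the module's output:

```python
import torch

class ResNet(torch.nn.Module):
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, inputs):
        return self.module(inputs) + inputs

# Identity passes x through unchanged, so the block returns x + x = 2x
block = ResNet(torch.nn.Identity())
x = torch.ones(4)
out = block(x)
print(out)  # tensor([2., 2., 2., 2.])
```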

Which you can then use with something like this:

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, kernel_size=7),
    # 32 filters in and out; padding=1 and no max pooling keep the spatial
    # shape unchanged, so the skip connection can be added
    ResNet(
        torch.nn.Sequential(
            torch.nn.Conv2d(32, 32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(32),
            torch.nn.Conv2d(32, 32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(32),
        )
    ),
    # Another ResNet block, you could make more of them
    # Downsampling using maxpool and others could be done in between etc. etc.
    ResNet(
        torch.nn.Sequential(
            torch.nn.Conv2d(32, 32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(32),
            torch.nn.Conv2d(32, 32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.BatchNorm2d(32),
        )
    ),
    # Pool each of the 32 feature maps down to 1x1, then flatten to (N, 32)
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    # 32 features in, 10 classes out
    torch.nn.Linear(32, 10),
)
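For a quick smoke test, here is a self-contained, scaled-down sketch of the same pattern (the layer sizes here are my own choices for brevity), verifying that the residual addition is shape-valid and that the model produces one logit vector per sample:

```python
import torch

class ResNet(torch.nn.Module):
    """Adds a skip connection around any module that preserves input shape."""
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, inputs):
        return self.module(inputs) + inputs

# Small model: padding=1 keeps spatial dims so the skip addition works
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    ResNet(
        torch.nn.Sequential(
            torch.nn.Conv2d(8, 8, kernel_size=3, padding=1),
            torch.nn.ReLU(),
        )
    ),
    torch.nn.AdaptiveAvgPool2d(1),  # (N, 8, 1, 1)
    torch.nn.Flatten(),             # (N, 8)
    torch.nn.Linear(8, 10),         # (N, 10)
)

out = model(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```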

An often-overlooked fact (without any real consequences when it comes to shallow networks) is that the skip connection should be left without any nonlinearities like ReLU or convolutional layers, and that is what you can see above (source: Identity Mappings in Deep Residual Networks).