How does the reshape before the fully connected layer work in the following CNN model?

Asked: 2018-09-21 22:38:10

Tags: python deep-learning conv-neural-network pytorch

Consider the following convolutional neural network (two convolutional layers):

import torch
import torch.nn as nn

class ConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2))
        self.fc = nn.Linear(7*7*32, num_classes)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)
        return out

The fully connected layer fc will have 7*7*32 inputs. In the forward pass above:

To me, out = out.reshape(out.size(0), -1) results in a tensor of shape (32, 49). That seems wrong, since the dense layer expects a different input size. What am I missing here?

[Note that in PyTorch the input format is [N, C, H, W], so the channel count comes before the image height and width.]
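
For illustration (not part of the original question), here is a minimal sketch of what that reshape actually does to a batch of layer2 outputs; the shapes assume 28x28 inputs as in the linked tutorial:

import torch

# Dummy layer2 output: batch of 4 samples, 32 channels, 7x7 feature maps
out = torch.rand(4, 32, 7, 7)

# reshape keeps the batch dimension and flattens everything else per sample
flat = out.reshape(out.size(0), -1)
print(flat.size())  # torch.Size([4, 1568]) -- 1568 == 32 * 7 * 7, matching fc's input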

https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/convolutional_neural_network/main.py#L35-L56

1 Answer:

Answer 0 (score: 1)

If you print the output of each layer, you can easily see what you are missing.

def forward(self, x):
    print('input', x.size())
    out = self.layer1(x)
    print('layer1-output', out.size())
    out = self.layer2(out)
    print('layer2-output', out.size())
    out = out.reshape(out.size(0), -1)
    print('reshape-output', out.size())
    out = self.fc(out)
    print('Model-output', out.size())
    return out

model = ConvNet()
test_input = torch.rand(4, 1, 28, 28)
model(test_input)

OUTPUT:

input torch.Size([4, 1, 28, 28])
layer1-output torch.Size([4, 16, 14, 14])
layer2-output torch.Size([4, 32, 7, 7])
reshape-output torch.Size([4, 1568])
Model-output torch.Size([4, 10])

The Conv2d layers do not change the height and width of the tensor here: with kernel_size=5, stride=1, and padding=2, the output spatial size is (H + 2*2 - 5)/1 + 1 = H, so only the number of channels changes. Each MaxPool2d layer (kernel_size=2, stride=2) halves the height and width of the tensor.
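
A quick sketch of that arithmetic, using the standard output-size formula floor((H + 2*padding - kernel_size) / stride) + 1 (conv_out is a hypothetical helper, not from the answer):

def conv_out(h, kernel, stride, padding):
    # Output size along one spatial dimension for Conv2d/MaxPool2d
    return (h + 2 * padding - kernel) // stride + 1

h = 28
h = conv_out(h, kernel=5, stride=1, padding=2)   # conv1: 28 -> 28
h = conv_out(h, kernel=2, stride=2, padding=0)   # pool1: 28 -> 14
h = conv_out(h, kernel=5, stride=1, padding=2)   # conv2: 14 -> 14
h = conv_out(h, kernel=2, stride=2, padding=0)   # pool2: 14 -> 7
print(h)  # 7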

input          = 4, 1, 28, 28
conv1_output   = 4, 16, 28, 28
max1_output    = 4, 16, 14, 14
conv2_output   = 4, 32, 14, 14
max2_output    = 4, 32, 7, 7
reshape_output = 4, 1568 (32*7*7)
fc_output      = 4, 10
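
As a side note beyond the original answer: PyTorch 1.2+ also ships nn.Flatten, which performs the same per-sample flattening as the manual reshape:

import torch
import torch.nn as nn

flatten = nn.Flatten()           # by default flattens every dim except dim 0 (the batch)
out = torch.rand(4, 32, 7, 7)    # same shape as the layer2 output
print(flatten(out).size())       # torch.Size([4, 1568])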