PyTorch reshape error

Asked: 2018-04-23 12:12:21

Tags: python size reshape pytorch

While training a CNN with PyTorch in Python, I get the following error:

RuntimeError: invalid argument 2: size '[-1 x 3136]' is invalid for input with 160000 elements at /opt/conda/conda-bld/pytorch-cpu_1515613813020/work/torch/lib/TH/THStorage.c:41

It relates to the x.view line in the model below:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net,self).__init__()
        self.conv1  = nn.Conv2d(3,32,5,padding=2) # 3 input channels, 32 out, filter size = 5x5, 2 pixel padding
        self.conv2  = nn.Conv2d(32,64,5,padding=2) # 32 input, 64 out,  filter size = 5x5, 2 block padding
        self.fc1    = nn.Linear(64*7*7,1024) # Fully connected layer 
        self.fc2    = nn.Linear(1024,2) #Fully connected layer 2 out.

    def forward(self,x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2) # Max pool over convolution with 2x2 pooling 
        x = F.max_pool2d(F.relu(self.conv2(x)), 2) # Max pool over convolution with 2x2 pooling 
        x = x.view(-1,64*7*7) # tensor.view() reshapes the tensor
        x = F.relu(self.fc1(x)) # Activation function after passing through fully connected layer
        x = F.dropout(x, training=True) #Dropout regularisation
        x = self.fc2(x) # Pass through final fully connected layer
        return F.log_softmax(x) # Give results using softmax

model = Net()
print(model) 

I'm not sure whether this is because the images have 3 channels or something else entirely. I understand that this line is supposed to reshape the image into a one-dimensional array ready for the fully connected layer, so when the error says the input has 160000 elements I don't know how to resolve it.

1 Answer:

Answer 0 (score: 1):

I assume your input images are probably of size 200x200px (by size I mean here height x width, not considering the number of channels).

While your nn.Conv2d layers are defined to output tensors of the same spatial size as their input (conv1 has 32 channels, conv2 has 64 channels), the F.max_pool2d operations are defined in such a way that they each divide height and width by 2.

So after the two max-pooling operations, your tensor has spatial size 200 / (2 * 2) x 200 / (2 * 2) = 50x50px. With the 64 channels from conv2, that gives 64 * 50 * 50 = 160000 elements.
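You can check these numbers by running the two convolution/pooling stages on a dummy input and printing the intermediate shapes (a minimal sketch; the layer definitions are copied from your model, and the single 3-channel 200x200 image is just my assumed input size):

import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(3, 32, 5, padding=2)
conv2 = nn.Conv2d(32, 64, 5, padding=2)

x = torch.ones(1, 3, 200, 200)           # one 3-channel 200x200 image (assumed size)
x = F.max_pool2d(F.relu(conv1(x)), 2)
print(x.shape)                           # torch.Size([1, 32, 100, 100])
x = F.max_pool2d(F.relu(conv2(x)), 2)
print(x.shape)                           # torch.Size([1, 64, 50, 50])
print(x.numel())                         # 64 * 50 * 50 = 160000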

Now, you need to adjust view() so that it converts an input of shape (batch_size, 64, 50, 50) into (batch_size, 64 * 50 * 50) (preserving the number of elements), and you need to adjust the first fully connected layer in the same way:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class Net(nn.Module):

    def __init__(self):
        super(Net,self).__init__()
        self.conv1  = nn.Conv2d(3,32,5,padding=2) # 3 input channels, 32 out, filter size = 5x5, 2 pixel padding
        self.conv2  = nn.Conv2d(32,64,5,padding=2) # 32 input, 64 out,  filter size = 5x5, 2 block padding
        self.fc1    = nn.Linear(64*50*50,1024) # Fully connected layer
        self.fc2    = nn.Linear(1024,2) #Fully connected layer 2, 2 out.

    def forward(self,x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2) # Max pool over convolution with 2x2 pooling
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2) # Max pool over convolution with 2x2 pooling
        x = x.view(-1,64*50*50) # tensor.view() reshapes the tensor
        x = F.relu(self.fc1(x)) # Activation function after passing through fully connected layer
        x = F.dropout(x, training=self.training) # Dropout regularisation (only active in training mode)
        x = self.fc2(x) # Pass through final fully connected layer
        return F.log_softmax(x, dim=1) # Give results as log-probabilities over the class dimension

model = Net()
print(model)

x = np.ones((1, 3, 200, 200))              # dummy input: one 3-channel 200x200 image
x = torch.tensor(x, dtype=torch.float32)   # convert to a float tensor (the conv weights are float32)
x = model(x)                               # run a forward pass
print(x)
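As a side note, the reshape can also be written without hard-coding -1 against the feature count: x.view(x.size(0), -1) flattens everything except the batch dimension. A minimal sketch of just that part, still assuming 200x200 inputs (the standalone conv1/conv2/fc1 variables are only for illustration):

import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(3, 32, 5, padding=2)
conv2 = nn.Conv2d(32, 64, 5, padding=2)
fc1 = nn.Linear(64 * 50 * 50, 1024)   # still tied to 200x200 inputs

x = torch.ones(4, 3, 200, 200)        # a batch of 4 images
x = F.max_pool2d(F.relu(conv1(x)), 2)
x = F.max_pool2d(F.relu(conv2(x)), 2)
x = x.view(x.size(0), -1)             # flatten everything except the batch dimension
print(x.shape)                        # torch.Size([4, 160000])
print(F.relu(fc1(x)).shape)           # torch.Size([4, 1024])

This keeps one row per sample for any batch size, whereas x.view(-1, 64*50*50) only works because the feature count is exactly right.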