RuntimeError: Expected 4-dimensional input for 4-dimensional weight X, but got 3-dimensional input of size Y

Time: 2020-06-17 11:12:09

Tags: python-3.x runtime-error pytorch cnn

I am building a CNN in order to do image classification on the EMNIST dataset.

To do so, I have the following dataset:

import scipy.io
emnist = scipy.io.loadmat(DIRECTORY + '/emnist-letters.mat')
data = emnist['dataset']
X_train = data['train'][0, 0]['images'][0, 0]
X_train = X_train.reshape((-1, 28, 28), order='F')

y_train = data['train'][0, 0]['labels'][0, 0]

X_test = data['test'][0, 0]['images'][0, 0]
X_test = X_test.reshape((-1, 28, 28), order='F')

y_test = data['test'][0, 0]['labels'][0, 0]

The shapes are:

  1. X_train = (124800, 28, 28)
  2. y_train = (124800, 1)
  3. X_test = (20800, 28, 28)
  4. y_test = (20800, 1)

Note that the images are grayscale, so the color is represented by a single number.
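For reference, a quick sanity check of the arrays loaded above; this is a minimal sketch that only assumes the X_train/y_train/X_test/y_test variables from the snippet:

    # Print shape and dtype of each array.
    # Expected shapes (per the list above):
    #   X_train: (124800, 28, 28), y_train: (124800, 1)
    #   X_test:  (20800, 28, 28),  y_test:  (20800, 1)
    for name, arr in [('X_train', X_train), ('y_train', y_train),
                      ('X_test', X_test), ('y_test', y_test)]:
        print(name, arr.shape, arr.dtype)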

I prepare it as follows:

import torch

train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_train), torch.from_numpy(y_train))
test_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X_test), torch.from_numpy(y_test))

train_loader = torch.utils.data.DataLoader(dataset=train_dataset, 
                                           batch_size=batch_size, 
                                           shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset, 
                                          batch_size=batch_size, 
                                          shuffle=False)

My model is as follows:

import torch.nn as nn
from torch.nn import Sequential, Conv2d, BatchNorm2d, ReLU, MaxPool2d, Linear

class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()

        self.cnn_layers = Sequential(
            # Defining a 2D convolution layer
            Conv2d(1, 4, kernel_size=3, stride=1, padding=1),
            BatchNorm2d(4),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=2, stride=2),
            # Defining another 2D convolution layer
            Conv2d(4, 4, kernel_size=3, stride=1, padding=1),
            BatchNorm2d(4),
            ReLU(inplace=True),
            MaxPool2d(kernel_size=2, stride=2),
        )

        self.linear_layers = Sequential(
            Linear(4 * 7 * 7, 10)
        )

    # Defining the forward pass    
    def forward(self, x):
        x = self.cnn_layers(x)
        x = x.view(x.size(0), -1)
        x = self.linear_layers(x)
        return x

model = CNNModel()
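To see where the 4 * 7 * 7 input size of the linear layer comes from, a dummy forward pass through the convolutional part is enough. A minimal sketch, assuming a correctly shaped 4D input of [batch_size, 1, 28, 28]:

    # Each MaxPool2d(kernel_size=2, stride=2) halves the spatial size:
    # 28 -> 14 -> 7, and the last conv layer has 4 output channels,
    # so the flattened feature size is 4 * 7 * 7 = 196.
    dummy = torch.randn(1, 1, 28, 28)   # [batch, channels, height, width]
    features = model.cnn_layers(dummy)
    print(features.shape)               # torch.Size([1, 4, 7, 7])
    print(model(dummy).shape)           # torch.Size([1, 10])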

The code below is part of the code I use to train the model:

for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):

        images = Variable(images)
        labels = Variable(labels)

        # Forward pass to get output/logits
        outputs = model(images)

However, when running my code, I get the following error:

RuntimeError: Expected 4-dimensional input for 4-dimensional weight [4, 1, 3, 3], but got 3-dimensional input of size [100, 28, 28] instead

So a 4D input is expected, while my input is 3D. What should I do so that a 3D input can be used instead of a 4D one?

A similar question was asked Here, but I don't see how to translate that to my code.

1 Answer:

Answer 0 (score: 1):

The convolution expects the input to be of size [batch_size, channels, height, width], but your images are of size [batch_size, height, width]; the channel dimension is missing. Grayscale is represented with a single channel, and you have correctly set in_channels of the first convolution to 1, but your images don't have the matching dimension.

You can easily add the singular dimension with torch.unsqueeze.

Also, please don't use Variable. It was deprecated with PyTorch 0.4.0, which was released over 2 years ago, and all of its functionality has been merged into the tensors.

for i, (images, labels) in enumerate(train_loader):
    # Add a single channel dimension
    # From: [batch_size, height, width]
    # To: [batch_size, 1, height, width]
    images = images.unsqueeze(1)

    # Forward pass to get output/logits
    outputs = model(images)
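Putting it together, a corrected training loop might look like the sketch below. The criterion, optimizer, and num_epochs names are assumptions (they are not shown in the question), and the .float() cast is only needed if the images were loaded as uint8; the essential changes are the unsqueeze call and dropping Variable:

    # Hypothetical setup (not shown in the question):
    # criterion = nn.CrossEntropyLoss()
    # optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(train_loader):
            # Add the missing channel dimension: [batch, 28, 28] -> [batch, 1, 28, 28];
            # convolutions also expect floating point input, so cast if the data is uint8.
            images = images.unsqueeze(1).float()
            # CrossEntropyLoss expects labels of shape [batch] with dtype long,
            # with class indices in the range [0, number of model outputs).
            labels = labels.squeeze(1).long()

            optimizer.zero_grad()
            outputs = model(images)           # forward pass, no Variable needed
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()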