What transformations do I need to apply to run my dataset through a neural network?

Asked: 2018-11-30 14:43:15

Tags: python neural-network pytorch

I am new to deep learning and PyTorch, but I am hoping someone can help me with this problem. My dataset contains images of different sizes. I am trying to build a simple neural network that can classify these images, but I keep running into a size mismatch error.

The neural network

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 32, 3)
        self.fc1 = nn.Linear(32 * 3 * 3, 200)
        self.fc2 = nn.Linear(200, 120)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x
net = Net()

My first convolutional layer has 1 input channel because I convert the images to grayscale. The 32 output channels were an arbitrary choice. The final fully connected layer has 120 output features because there are 120 different classes.
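To see what the convolutional layers actually produce, I traced the shapes with a dummy batch (just a sketch for illustration, assuming 32x32 grayscale inputs like my crops):

# Sketch: trace activation shapes for a dummy batch of four 1-channel 32x32 images.
import torch
import torch.nn.functional as F

with torch.no_grad():
    x = torch.randn(4, 1, 32, 32)   # same shape as one of my training batches
    x = F.relu(net.conv1(x))        # 3x3 kernel, stride 1 -> torch.Size([4, 32, 30, 30])
    print(x.shape)
    x = F.relu(net.conv2(x))        # 3x3 kernel, stride 1 -> torch.Size([4, 32, 28, 28])
    print(x.shape)

I am not sure how to relate these shapes to the sizes I picked for fc1.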

Defining the transforms and splitting into training and validation sets

import os
import torch
from torchvision import datasets, transforms

transform = transforms.Compose(
    [transforms.Grayscale(1),
     transforms.RandomCrop((32,32)),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

data_dir = 'dataset'
full_dataset = datasets.ImageFolder(os.path.join(data_dir, 'train'), transform = transform)

train_size = int(0.8 * len(full_dataset))
val_size = len(full_dataset) - train_size
trainset, valset = torch.utils.data.random_split(full_dataset, [train_size, val_size])

trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                           shuffle=True, num_workers=2)
valloader = torch.utils.data.DataLoader(valset, batch_size=4,
                                           shuffle=False, num_workers=2)
classes = full_dataset.classes

I convert the images to grayscale because they are gray anyway. I crop the images to 32x32 because they come in different sizes and, as I understand it, they all have to be the same size to be fed through the neural network. Up to this point everything works.
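As a quick sanity check (a sketch, assuming the dataset layout above), I looked at the shape of a single sample:

# Sketch: check that one sample is a single-channel 32x32 tensor.
# Note: depending on the torchvision version, a single-channel
# Normalize((0.5,), (0.5,)) may be required instead of the three-value
# version when the image has only one channel.
img, label = trainset[0]
print(img.shape)          # expected: torch.Size([1, 32, 32])
print(classes[label])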

Training the neural network
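The loss function and optimizer are not shown in the snippet; I set them up roughly as in the tutorial (a sketch, the exact hyperparameters are not the point):

# Sketch: loss and optimizer, roughly as in the referenced tutorial.
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)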

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

When I run this last block of code, I get the following error: size mismatch, m1: [3584 x 28], m2: [288 x 200] at /Users/soumith/miniconda2/conda-bld/pytorch_1532623076075/work/aten/src/TH/generic/THTensorMath.cpp:2070, raised on the line outputs = net(inputs).

My code is a variation of the code provided in this Pytorch tutorial. Can someone tell me what I am doing wrong?

Update

I updated the neural network class to:

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()

But now I get an error at loss = criterion(outputs, labels): Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /Users/soumith/miniconda2/conda-bld/pytorch_1532623076075/work/aten/src/THNN/generic/ClassNLLCriterion.c:93
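From what I can tell, this assertion typically means a target label index is at least as large as the number of model outputs. A quick check of the relevant sizes (sketch):

# Sketch: compare the number of classes with the size of the final layer.
print(len(classes))          # 120 different classes in my dataset
print(net.fc3.out_features)  # 10 outputs in the updated network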

1 Answer:

Answer 0 (score: 2)

In your first configuration, you configured self.fc1 incorrectly. Its input needs to have size 32 * 28 * 28 instead of 32 * 3 * 3, because your images are 32 * 32 and the kernel size and stride are 3 and 1, respectively. See this video for a simpler explanation. Now try to adjust your second configuration yourself; if you can't, comment below.
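A minimal sketch of that adjustment to the first configuration (it also needs a flatten before fc1, just like the view/num_flat_features step in your updated class; everything else is kept as in your question):

# Sketch: first configuration with fc1 sized for the 32 x 28 x 28 feature map
# (32x32 input, two 3x3 convolutions with stride 1 and no padding).
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3)
        self.conv2 = nn.Conv2d(32, 32, 3)
        self.fc1 = nn.Linear(32 * 28 * 28, 200)
        self.fc2 = nn.Linear(200, 120)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.view(x.size(0), -1)   # flatten to (batch, 32*28*28) before the linear layers
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x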