PyTorch deep convolutional network does not converge on CIFAR10

Date: 2019-05-01 10:33:45

Tags: python deep-learning pytorch

I copied the CIFAR10 sample network from the PyTorch tutorial and added more layers, including batch normalization (BN). Even after 45 epochs, the network still only achieves 68% classification accuracy on the test set.

The network consists of:

  • 2 convolutional layers with 3x3 kernels (input size reduced from 32px to 28px)
  • one max-pooling layer (input size reduced from 28px to 14px)
  • 3 convolutional layers with 3x3 kernels (input size reduced from 14px to 8px)
  • a fully connected network with 3 layers of 256 -> 256 -> 10 neurons
  • batch normalization applied to all layers except the last FC layer, including the convolutional layers
  • ReLU applied to all convolutional layers and all hidden FC layers

Am I constructing or using anything incorrectly?

import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1_1 = nn.Conv2d(3, 16, 3)  # 32 -> 30
        self.bn1_1 = nn.BatchNorm2d(16)
        self.conv1_2 = nn.Conv2d(16, 16, 3) # 30 -> 28
        self.bn1_2 = nn.BatchNorm2d(16)
        self.pool = nn.MaxPool2d(2, 2)  # 28 -> 14
        self.conv2_1 = nn.Conv2d(16, 16, 3) # 14 -> 12
        self.bn2_1 = nn.BatchNorm2d(16)
        self.conv2_2 = nn.Conv2d(16, 16, 3) # 12 -> 10
        self.bn2_2 = nn.BatchNorm2d(16)
        self.conv2_3 = nn.Conv2d(16, 16, 3) # 10 -> 8
        self.bn2_3 = nn.BatchNorm2d(16)
        self.fc1 = nn.Linear(16 * 8 * 8, 256)
        self.bn4 = nn.BatchNorm1d(256)
        self.fc2 = nn.Linear(256, 256)
        self.bn5 = nn.BatchNorm1d(256)
        self.fc3 = nn.Linear(256, 10)

    def forward(self, x):
        x = F.relu(self.bn1_1(self.conv1_1(x)))
        x = self.pool(F.relu(self.bn1_2(self.conv1_2(x))))
        x = F.relu(self.bn2_1(self.conv2_1(x)))
        x = F.relu(self.bn2_2(self.conv2_2(x)))
        x = F.relu(self.bn2_3(self.conv2_3(x)))
        x = x.view(-1, 16 * 8 * 8)
        x = F.relu(self.bn4(self.fc1(x)))
        x = F.relu(self.bn5(self.fc2(x)))
        x = self.fc3(x)
        return x
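
# Added sketch (not in the original question): sanity-check the architecture by
# pushing a dummy batch through; eval() keeps BatchNorm from updating its running
# statistics, and the assert confirms the 16 * 8 * 8 flatten size is correct.
_shape_check = Net().eval()
assert _shape_check(torch.randn(2, 3, 32, 32)).shape == (2, 10)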

net = Net()
device = 'cuda:0'  # assumes a CUDA-capable GPU is available
net.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

transform = transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=8,
                                              shuffle=True, num_workers=2)

for epoch in range(128):  # loop over the dataset multiple times
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
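
The code above does not show how the 68% figure is measured. A minimal evaluation sketch in the spirit of the PyTorch tutorial follows; the testset/testloader names below are assumptions and not part of the original code:

testset = torchvision.datasets.CIFAR10(root='./data', train=False,
                                       download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=8,
                                         shuffle=False, num_workers=2)

net.eval()  # switch BatchNorm layers to their running statistics
correct, total = 0, 0
with torch.no_grad():
    for inputs, labels in testloader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = net(inputs)
        _, predicted = torch.max(outputs, 1)  # class with the highest logit
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print('Test accuracy: %.1f%%' % (100.0 * correct / total))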

Note: added the "python" tag so that the code is highlighted

Note: updated the forward method to apply F.relu to the hidden FC layers

1 Answer:

Answer 0 (score: 0)

Use a sigmoid activation on the last layer.
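
A minimal sketch of the change this answer suggests (fc3 and the dummy activations below are stand-ins for the question's code, not the author's method; note that nn.CrossEntropyLoss applies log-softmax internally and expects raw logits, so this is purely an illustration of the suggestion):

import torch
import torch.nn as nn

fc3 = nn.Linear(256, 10)       # stand-in for the question's final FC layer
x = torch.randn(8, 256)        # stand-in for activations entering fc3
logits = fc3(x)
probs = torch.sigmoid(logits)  # the suggestion: sigmoid on the last layer
print(probs.shape)             # torch.Size([8, 10]), values in (0, 1)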