Image classification accuracy not increasing

Date: 2020-07-25 10:04:55

Tags: deep-learning pytorch classification bayesian cnn

I am trying to implement image classification with a Bayesian CNN that uses dropout (Monte Carlo dropout).

I defined two models:

  1. one with dropout, used during the training phase
  2. one without dropout, used during testing (is dropout supposed to be turned off at test time? please confirm)

When I run the program, the train/test accuracy stays flat and never increases, and I cannot see what the problem is. I don't know whether it is caused by the convolution and pooling layer parameters or by something else. Any ideas?

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable


# LeNet-style baseline without dropout
class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5, padding=2)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5, padding=2)
        self.fc1 = nn.Linear(16 * 8 * 8, 120)  # output size 120 to match fc2's input
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 8 * 8)  # conv2 outputs 16 channels at 8x8
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# LeNet with MC dropout (MCDO)
class Net_MCDO(nn.Module):

    def __init__(self):
        super(Net_MCDO, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5, padding=2)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5, padding=2)  # in_channels must match conv1's out_channels
        self.fc1 = nn.Linear(16 * 8 * 8, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.dropout = nn.Dropout(p=0.3)

    def forward(self, x):
        x = self.pool(self.dropout(self.conv1(x)))
        x = self.pool(self.dropout(self.conv2(x)))
        x = x.view(-1, 16 * 8 * 8)  # flatten to match fc1's input size
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(self.dropout(x)))
        x = F.softmax(self.fc3(self.dropout(x)), dim=1)
        return x

net = Net()
mcdo = Net_MCDO()

CE = nn.CrossEntropyLoss()
learning_rate = 0.001
optimizer = optim.SGD(net.parameters(), lr=learning_rate, momentum=0.9)
epoch_num = 30
n_batches = len(trainloader)  # batches per epoch, used to average the loss
train_accuracies = np.zeros(epoch_num)
test_accuracies = np.zeros(epoch_num)

for epoch in range(epoch_num):
    average_loss = 0.0
    total = 0
    success = 0

    # training pass (the dropout model)
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = Variable(inputs), Variable(labels)

        optimizer.zero_grad()
        outputs = mcdo(inputs)
        loss = CE(outputs, labels)
        loss.backward()
        optimizer.step()

        average_loss += loss.item()
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        success += (predicted == labels.data).sum()

    train_accuracy = 100.0 * success / total
    success = 0
    total = 0

    # evaluation pass (the dropout-free model)
    for (inputs, labels) in testloader:
        inputs, labels = Variable(inputs), Variable(labels)
        outputs = net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        success += (predicted == labels.data).sum()

    test_accuracy = 100.0 * success / total
    print("epoch {}, average_loss {}, train_accuracy {}, test_accuracy {}".format(
        epoch, average_loss / n_batches, train_accuracy, test_accuracy))

    # save per-epoch accuracies for plotting
    train_accuracies[epoch] = train_accuracy
    test_accuracies[epoch] = test_accuracy

plt.plot(np.arange(1, epoch_num + 1), train_accuracies)
plt.plot(np.arange(1, epoch_num + 1), test_accuracies)
plt.show()

2 answers:

Answer 0 (score: 1)

Yes, you are right: dropout should not be active at test time (and the same applies to batchnorm). But you don't need to create a separate model for this; you can switch a single model between training mode and evaluation mode. Just create one model, e.g. 'net':

# when training
outputs = net.train()(inputs)

# when testing:
outputs = net.eval()(inputs)
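
Putting the two modes together, a minimal sketch of the single-model pattern (reusing the question's Net_MCDO, CE, optimizer, and data loaders, so those names are assumed) could look like this:

net = Net_MCDO()  # one model for both phases

# training phase: dropout active
net.train()
for inputs, labels in trainloader:
    optimizer.zero_grad()
    loss = CE(net(inputs), labels)
    loss.backward()
    optimizer.step()

# test phase: dropout disabled, no gradients needed
net.eval()
with torch.no_grad():
    for inputs, labels in testloader:
        outputs = net(inputs)

A single model also sidesteps a mismatch in the question's loop, where optimizer was built from net.parameters() while mcdo was the model being trained, so the trained weights were never updated by the optimizer.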

In any case, though, you shouldn't really use dropout together with conv layers, only on the final dense layers; that is probably why it isn't improving. Also, your architecture is quite small. How large are your images? If they are bigger than 32x32 you could try adding another layer. You could also try starting with a learning rate of about 0.001 and dividing it by 2 whenever the accuracy has not improved for some number of epochs. Hope that helps :)
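
That halve-on-plateau schedule can be automated with PyTorch's built-in ReduceLROnPlateau scheduler; a small sketch (the patience of 3 epochs is just an illustrative choice):

scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='max', factor=0.5, patience=3)  # halve LR after 3 epochs without improvement

for epoch in range(epoch_num):
    # ... train and evaluate as usual ...
    scheduler.step(test_accuracy)  # pass the monitored metric ('max' mode: higher is better)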

Edit: I just noticed that you are missing the ReLU activations in the second model (the one with dropout), which will cause problems.

Answer 1 (score: 0)

PyTorch folds the softmax into CrossEntropyLoss for numerical stability (and better training), so you should remove the softmax layer from your model (see the CrossEntropyLoss documentation). Keeping a softmax layer in the model slows down training and can make the metrics worse, because you squash the gradients twice, so the weight updates matter much less.
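
The equivalence is easy to check: CrossEntropyLoss expects raw logits and computes log-softmax followed by negative log-likelihood internally. A small self-contained sketch (random logits, illustrative shapes):

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)            # raw scores for 4 samples, 10 classes
target = torch.tensor([1, 0, 9, 3])    # class indices

ce  = nn.CrossEntropyLoss()(logits, target)
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))         # True: CE == log-softmax + NLL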

Change your code to:

class Net_MCDO(nn.Module):
    def __init__(self):
        super(Net_MCDO, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5, padding=2)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5, padding=2)  # in_channels must match conv1's out_channels
        self.fc1 = nn.Linear(16 * 8 * 8, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
        self.dropout = nn.Dropout(p=0.3)

    def forward(self, x):
        x = self.pool(F.relu(self.dropout(self.conv1(x))))  # recommended to add the relu
        x = self.pool(F.relu(self.dropout(self.conv2(x))))  # recommended to add the relu
        x = x.view(-1, 16 * 8 * 8)  # flatten to match fc1's input size
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(self.dropout(x)))
        x = self.fc3(self.dropout(x))  # no activation function needed for the last layer
        return x

Additionally, I recommend using an activation function such as ReLU() after every conv or linear layer. Otherwise you are just performing a stack of linear operations that a single layer could learn.
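
That collapse is easy to verify: two Linear layers with no nonlinearity in between compute exactly one affine map. A small sketch (layer sizes are arbitrary):

import torch
import torch.nn as nn

a = nn.Linear(8, 6)
b = nn.Linear(6, 4)
x = torch.randn(1, 8)

# fold b(a(x)) = (Wb @ Wa) x + (Wb @ ba + bb) into one equivalent layer
merged = nn.Linear(8, 4)
with torch.no_grad():
    merged.weight.copy_(b.weight @ a.weight)
    merged.bias.copy_(b.weight @ a.bias + b.bias)

print(torch.allclose(b(a(x)), merged(x), atol=1e-6))  # True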

I hope that helps =)