PyTorch model accuracy test

Time: 2018-09-05 02:16:25

Tags: python conv-neural-network pytorch

I am classifying a series of images with PyTorch. The NN is defined as follows:

from collections import OrderedDict

import torch
from torch import nn, optim
from torchvision import models

model = models.vgg16(pretrained=True)
model.cuda()
for param in model.parameters():
    param.requires_grad = False

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 4096)),
    ('relu', nn.ReLU()),
    ('fc2', nn.Linear(4096, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))

model.classifier = classifier

The criterion and optimizer are as follows:

criterion = nn.NLLLoss()
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)

My validation function is as follows:

def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0
    for images, labels in testloader:

        images.resize_(images.shape[0], 784)

        output = model.forward(images)
        test_loss += criterion(output, labels).item()

        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()

    return test_loss, accuracy

Here is the code snippet that raises the following error:

RuntimeError: input has less dimensions than expected

epochs = 3
print_every = 40
steps = 0
running_loss = 0
testloader = dataloaders['test']

# change to cuda
model.to('cuda')

for e in range(epochs):
    running_loss = 0
    for ii, (inputs, labels) in enumerate(dataloaders['train']):
        steps += 1

        inputs, labels = inputs.to('cuda'), labels.to('cuda')

        optimizer.zero_grad()

        # Forward and backward passes
        outputs = model.forward(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

        if steps % print_every == 0:
            model.eval()
            with torch.no_grad():
                test_loss, accuracy = validation(model, testloader, criterion)

            print("Epoch: {}/{}.. ".format(e+1, epochs),
                  "Training Loss: {:.3f}.. ".format(running_loss/print_every),
                  "Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
                  "Test Accuracy: {:.3f}".format(accuracy/len(testloader)))

            running_loss = 0

Any help?

3 answers:

Answer 0 (score: 2):

Just in case it helps someone.

If you don't have a system with a GPU (for example, you are developing on a laptop and will eventually test on a server with a GPU), you can handle it like this:

if torch.cuda.is_available():
    inputs = inputs.to('cuda')
else:
    inputs = inputs.to('cpu')

Also, if you are wondering why there is a LogSoftmax instead of a Softmax, it's because he is using NLLLoss as his loss function. You can read more about softmax here.
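
To make that pairing concrete, here is a small self-contained sketch (with made-up logits and targets, not taken from the question) showing that LogSoftmax followed by NLLLoss gives the same loss as CrossEntropyLoss applied directly to the raw logits:

import torch
import torch.nn as nn

logits = torch.randn(4, 102)           # a batch of 4 samples, 102 classes
targets = torch.randint(0, 102, (4,))  # made-up class indices for the sketch

# NLLLoss expects log-probabilities, which is exactly what LogSoftmax produces
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_a = nn.NLLLoss()(log_probs, targets)

# CrossEntropyLoss fuses the two steps and works on raw logits
loss_b = nn.CrossEntropyLoss()(logits, targets)

print(torch.allclose(loss_a, loss_b))  # True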

Answer 1 (score: 1):

I needed to change the validation function as follows:

def validation(model, testloader, criterion):
    test_loss = 0
    accuracy = 0

    for inputs, labels in testloader:
        # move the batch to the same device as the model
        inputs, labels = inputs.to('cuda'), labels.to('cuda')
        output = model.forward(inputs)
        test_loss += criterion(output, labels).item()

        ps = torch.exp(output)
        equality = (labels.data == ps.max(dim=1)[1])
        accuracy += equality.type(torch.FloatTensor).mean()

    return test_loss, accuracy

The inputs need to be converted to 'cuda' with inputs.to('cuda'), and the labels likewise, so they live on the same device as the model.
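
For reference, a minimal sketch of how this fixed function is called from the training loop in the question (model, criterion and dataloaders are the names defined there):

model.to('cuda')

model.eval()                 # disable dropout for evaluation
with torch.no_grad():        # no gradients needed during validation
    test_loss, accuracy = validation(model, dataloaders['test'], criterion)
model.train()                # switch back to training mode afterwards

print("Test Loss: {:.3f}.. ".format(test_loss / len(dataloaders['test'])),
      "Test Accuracy: {:.3f}".format(accuracy / len(dataloaders['test'])))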

Answer 2 (score: 0):

Below is another validation approach that may help if someone wants to build a model using the GPU. First we need to create a device that uses the GPU or the CPU. Start by importing the torch modules.

import torch
import torch.nn as nn

from torch.utils.data import DataLoader

Then create the device:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

We will use this device on our data. We can then compute the model's accuracy with the following method.

def check_accuracy(test_loader: DataLoader, model: nn.Module, device):
    num_correct = 0
    total = 0
    model.eval()

    with torch.no_grad():
        for data, labels in test_loader:
            data = data.to(device=device)
            labels = labels.to(device=device)

            # take the index of the highest score as the predicted class
            scores = model(data)
            _, predictions = scores.max(dim=1)
            num_correct += (predictions == labels).sum()
            total += labels.size(0)

        print(f"Test Accuracy of the model: {float(num_correct)/float(total)*100:.2f}")