RuntimeError: Expected object of backend CUDA but got backend CPU for argument: ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())

Date: 2019-03-21 10:39:32

Tags: python pytorch torch tensor

While the forward function of my neural network is executing (after the training phase has completed), I get RuntimeError: Expected object of backend CUDA but got backend CPU for argument #4 'mat1'. The error trace indicates that the error is raised by the call to output = self.layer1(x). I have tried to move all of the tensor data to my GPU, but it seems I am still missing something that needs to be moved.

Here is the code I have tried:

import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

use_cuda = torch.cuda.is_available()
device = torch.device('cuda:0' if use_cuda else 'cpu')

class NeuralNet(nn.Module):

    def __init__(self, input_size, hidden_size, output_size):
        super(NeuralNet, self).__init__()
        self.layer1 = nn.Linear(input_size, hidden_size).cuda(device)
        self.layer2 = nn.Linear(hidden_size, output_size).cuda(device)
        self.relu = nn.ReLU().cuda(device)

    def forward(self, x):
        x.cuda(device)  # NB: Tensor.cuda() is not in-place; the returned tensor is discarded here
        output = self.layer1(x)  # throws the error
        output = self.relu(output)
        output = self.layer2(output)
        return output


def main():
    transform = transforms.Compose([
        transforms.ToTensor()
    ])

    mnist_trainset = datasets.MNIST(root='D:\\MNIST', train=True, download=False, transform=transform)
    mnist_testset = datasets.MNIST(root='D:\\MNIST', train=False, download=False, transform=transform)

    train_loader = DataLoader(dataset=mnist_trainset, batch_size=100, shuffle=True)
    test_loader = DataLoader(dataset=mnist_testset, batch_size=100, shuffle=False)

    input_size = 784
    hidden_size = 500
    output_size = 10
    num_epochs = 5

    learning_rate = 0.001

    model = NeuralNet(input_size, hidden_size, output_size)
    model.cuda(device)

    lossFunction = nn.CrossEntropyLoss()
    lossFunction.cuda(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    losses_in_epochs = []
    total_step = len(train_loader)
    for epoch in range(num_epochs):
        for i, (images, labels) in enumerate(train_loader):
            images = images.to(device)
            labels = labels.to(device)
            images = images.reshape(-1, 28 * 28)

            out = model(images)
            loss = lossFunction(out, labels)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            if (i + 1) % 100 == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, i + 1, total_step,
                                                                         loss.item()))

            if (i % 600) == 0:
                losses_in_epochs.append(loss.item())

    with torch.no_grad():
        correct = 0
        total = 0
        for images, labels in test_loader:
            images = images.reshape(-1, 28 * 28)
            out = model(images)
            _, predicted = torch.max(out.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()
            print('Accuracy of the network on the 10000 test images: {} %'.format(100 * correct / total))


if __name__ == '__main__':
    main()

1 Answer:

Answer 0 (score: 1):

The fact that the error only happens during the testing step, when you try to compute accuracy, should already give you a hint: the training loop runs without problems.

The error occurs simply because you do not send the images and labels to the GPU in this step. Here is your corrected evaluation loop:

with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.to(device)  # missing line from original code
        labels = labels.to(device)  # missing line from original code
        images = images.reshape(-1, 28 * 28)
        out = model(images)
        _, predicted = torch.max(out.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
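As a side sketch that goes beyond the original answer, one way to avoid forgetting these transfers is to centralize them in a small helper; the name to_device below is just an illustrative choice, not part of the question's code:

def to_device(batch, device):
    # Move an (images, labels) pair to the target device in one place,
    # so the training and evaluation loops cannot drift apart.
    images, labels = batch
    return images.to(device), labels.to(device)

Both loops can then start with:

for batch in test_loader:
    images, labels = to_device(batch, device)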

By the way, you don't need to send each layer to the GPU separately (inside your class __init__). It is preferable to send the whole instantiated model to the GPU at once.
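A minimal sketch of that simplification, based on the NeuralNet class from the question: the per-layer .cuda(device) calls go away, and a single .to(device) after construction moves every parameter. The no-op x.cuda(device) line in forward is dropped as well, since the data loops already move each batch.

class NeuralNet(nn.Module):

    def __init__(self, input_size, hidden_size, output_size):
        super(NeuralNet, self).__init__()
        # No per-layer .cuda(device) calls needed here.
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.layer2 = nn.Linear(hidden_size, output_size)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Inputs are expected to already be on the right device
        # (moved in the training/evaluation loops).
        output = self.layer1(x)
        output = self.relu(output)
        output = self.layer2(output)
        return output

model = NeuralNet(input_size, hidden_size, output_size).to(device)  # one transfer for all parameters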