Expected object of backend CPU but got backend CUDA for argument #2 'source'

Time: 2019-06-21 11:18:24

Tags: python python-3.x pytorch

I have tried other answers, but the error is not removed. What sets this apart from the other questions I found is the last term used in the error, 'source', which did not appear in any of them. If possible, please also explain what the term 'source' in the error means. Also, the code works fine when run without the GPU.


I am using Google Colab with the GPU enabled.

import torch
from torch import nn
import syft as sy

hook = sy.TorchHook(torch)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(784,256),
                     nn.ReLU(),
                     nn.Linear(256,128),
                     nn.ReLU(),
                     nn.Linear(128,64),
                     nn.ReLU(),
                     nn.Linear(64,10),
                     nn.LogSoftmax(dim = 1))

model = model.to(device)

Output:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-42-136ec343040a> in <module>()
      8                      nn.LogSoftmax(dim = 1))
      9 
---> 10 model = model.to(device)

3 frames
/usr/local/lib/python3.6/dist-packages/syft/frameworks/torch/hook/hook.py in data(self, new_data)
    368 
    369                 with torch.no_grad():
--> 370                     self.set_(new_data)
    371             return self
    372 

RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'source'

1 Answer:

Answer 0: (score: 1)

This problem is related to PySyft. As you can see in Issue #1893, the current workaround is to set the following:

import torch
torch.set_default_tensor_type(torch.cuda.FloatTensor)

right after import torch.
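This default makes every newly created floating-point tensor a CUDA tensor, so the parameters that nn.Linear allocates are already on the GPU and PySyft's hooked data setter no longer has to copy a CUDA tensor into a CPU one. A quick way to check the effect (a minimal sketch, assuming a CUDA-capable runtime):

import torch
torch.set_default_tensor_type(torch.cuda.FloatTensor)
print(torch.empty(3).device)   # cuda:0 -- new tensors now live on the GPU by default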

Code:

import torch
from torch import nn
torch.set_default_tensor_type(torch.cuda.FloatTensor)  # <-- workaround

import syft as sy
hook = sy.TorchHook(torch)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

model = nn.Sequential(nn.Linear(784,256),
                     nn.ReLU(),
                     nn.Linear(256,128),
                     nn.ReLU(),
                     nn.Linear(128,64),
                     nn.ReLU(),
                     nn.Linear(64,10),
                     nn.LogSoftmax(dim = 1))

model = model.to(device)
print(model)

Output:

cuda
Sequential(
  (0): Linear(in_features=784, out_features=256, bias=True)
  (1): ReLU()
  (2): Linear(in_features=256, out_features=128, bias=True)
  (3): ReLU()
  (4): Linear(in_features=128, out_features=64, bias=True)
  (5): ReLU()
  (6): Linear(in_features=64, out_features=10, bias=True)
  (7): LogSoftmax()
)
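
As for the term 'source': the failing call in the traceback is self.set_(new_data), and torch.Tensor.set_(source) replaces a tensor's underlying storage with that of source. So "argument #2 'source'" is simply the tensor being copied from (argument #1 is the tensor itself), and the error means the destination tensor lives on the CPU backend while the source is a CUDA tensor. A minimal reproduction of the same mismatch (a sketch, assuming a CUDA-capable runtime; the exact error wording depends on the PyTorch version):

import torch

cpu_t = torch.empty(3)            # destination tensor on the CPU backend
cuda_t = torch.empty(3).cuda()    # source tensor on the CUDA backend

try:
    cpu_t.set_(cuda_t)            # mixing backends raises a RuntimeError
except RuntimeError as e:
    print(e)                      # older PyTorch: "... for argument #2 'source'"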