Tensor type mismatch when moving to the GPU

Asked: 2017-08-01 20:18:51

Tags: python deep-learning pytorch

I'm getting the following error when trying to move my network and tensors to the GPU. I've checked that the network parameters are moved to the GPU, and I check each batch's tensor and move it if it's not already on the GPU. But I still get this issue saying there is a tensor type mismatch: one is a torch.cuda.FloatTensor and the other is a torch.FloatTensor. Can someone tell me what I'm doing wrong? Thanks.

My code:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class Train():
  def __init__(self, network, training, address):
    self.network    = network
    self.address    = address
    self.batch_size = training['batch_size']
    self.iterations = training['iterations']
    self.samples    = training['samples']
    self.data       = training['data']
    self.lr         = training['lr']
    self.noisy_lr   = training['nlr']
    self.cuda       = training['cuda']
    self.save       = training['save']
    self.scale      = training['scale']
    self.limit      = training['limit']
    self.replace    = training['strategy']
    self.optimizer  = torch.optim.Adam(self.network.parameters(), lr=self.lr)

  def tensor_to_Variable(self, t):
    if next(self.network.parameters()).is_cuda and not t.is_cuda:
        t = t.cuda()

    return Variable(t)

  def train(self):
    if self.cuda:
        self.network.cuda()
    dh = DataHandler(self.data)
    loss_fn = torch.nn.MSELoss()
    losses    = []
    validate  = []
    val_size  = 100
    val_diff  = 1
    total_val = float(val_size * self.batch_size)
    hypos     = []
    labels    = []

    # training loop
    for i in range(self.iterations):
        x, y = dh.get_batch(self.batch_size)
        x = self.tensor_to_Variable(x)
        y = self.tensor_to_Variable(y)

        self.optimizer.zero_grad()
        hypo = self.network(x)
        loss = loss_fn(hypo, y)
        loss.backward()
        self.optimizer.step()


class Feedforward(nn.Module):
    def __init__(self, topology):
        super(Feedforward, self).__init__()
        self.input_dim     = topology['features']
        self.num_hidden    = topology['hidden_layers']
        self.hidden_dim    = topology['hidden_dim']
        self.output_dim    = topology['output_dim']
        self.input_layer   = nn.Linear(self.input_dim, self.hidden_dim)
        self.hidden_layer  = nn.Linear(self.hidden_dim, self.hidden_dim)
        self.output_layer  = nn.Linear(self.hidden_dim, self.output_dim)
        self.dropout_layer = nn.Dropout(p=0.2)

    def forward(self, x):
        batch_size = x.size()[0]
        feat_size  = x.size()[1]
        input_size = batch_size * feat_size

        self.input_layer = nn.Linear(input_size, self.hidden_dim)
        hidden = self.input_layer(x.view(1, input_size)).clamp(min=0)

        for _ in range(self.num_hidden):
            hidden = self.dropout_layer(F.relu(self.hidden_layer(hidden)))

        output_size = batch_size * self.output_dim
        self.output_layer = nn.Linear(self.hidden_dim, output_size)
        return self.output_layer(hidden).view(output_size)

The error:

Traceback (most recent call last):
  File "/media/project/train.py", line 78, in train
    hypo = self.network(x)

TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.cuda.FloatTensor, torch.FloatTensor), but expected one of:
 * (torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)
 * (float beta, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)

Stack trace:

Traceback (most recent call last):
  File "smpl.py", line 90, in <module>
    main()
  File "smpl.py", line 80, in main
    trainer.train()
  File "/media/mpl/temp/train.py", line 82, in train
    hypo = self.network(x)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "model/network.py", line 35, in forward
    hidden = self.input_layer(x.view(1, input_size)).clamp(min=0)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 206, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/linear.py", line 54, in forward
    return self._backend.Linear()(input, self.weight, self.bias)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/_functions/linear.py", line 10, in forward
    output.addmm_(0, 1, input, weight.t())
TypeError: addmm_ received an invalid combination of arguments - got (int, int, torch.cuda.FloatTensor, torch.FloatTensor), but expected one of:
 * (torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
 * (float beta, float alpha, torch.cuda.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)
 * (float beta, float alpha, torch.cuda.sparse.FloatTensor mat1, torch.cuda.FloatTensor mat2)
      didn't match because some of the arguments have invalid types: (int, int, torch.cuda.FloatTensor, torch.FloatTensor)

2 Answers:

Answer 0 (score: 5):

This is happening because you are reinitializing self.input_layer in your forward() function.

Calling self.network.cuda() moves all of the model parameters into cuda. This means that any and all layers you initialize when creating your FeedForward object are moved into cuda memory. But when you reinitialize self.input_layer inside forward(), you initialize that layer's parameters on the cpu, not the gpu. The same goes for self.output_layer.
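
A minimal sketch of the fix, assuming the repeated hidden layers are meant to share one set of weights as in the original code: construct every layer once in __init__ and keep forward() free of layer construction, so that self.network.cuda() moves all parameters in one go.

class Feedforward(nn.Module):
    def __init__(self, topology):
        super(Feedforward, self).__init__()
        self.num_hidden    = topology['hidden_layers']
        # All layers are created here, so network.cuda() moves every
        # parameter to the GPU before forward() is ever called.
        self.input_layer   = nn.Linear(topology['features'], topology['hidden_dim'])
        self.hidden_layer  = nn.Linear(topology['hidden_dim'], topology['hidden_dim'])
        self.output_layer  = nn.Linear(topology['hidden_dim'], topology['output_dim'])
        self.dropout_layer = nn.Dropout(p=0.2)

    def forward(self, x):
        # x has shape (batch_size, features); no layers are re-created here.
        hidden = F.relu(self.input_layer(x))
        for _ in range(self.num_hidden):
            hidden = self.dropout_layer(F.relu(self.hidden_layer(hidden)))
        return self.output_layer(hidden)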

Answer 1 (score: 2):

First of all, to compute on the GPU you have to prepare your data type as a CUDA tensor.

In this case, it can be done simply as follows.

dtype = torch.cuda.FloatTensor
x = torch.autograd.Variable(x.type(dtype))

You can make the change in your tensor_to_Variable function accordingly.
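
For instance, a sketch of the helper rewritten along those lines, keeping the question's method name (the type conversion replaces the bare .cuda() call):

def tensor_to_Variable(self, t):
    # Convert to a CUDA tensor only when the network's parameters live on the GPU.
    if next(self.network.parameters()).is_cuda:
        t = t.type(torch.cuda.FloatTensor)
    return Variable(t)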

Secondly, to specify that you want your "network" to expect CUDA tensors, network.cuda() will help.

Lastly, although this is not part of your question, you need not specify the batch size when configuring your feedforward network. To elucidate:

1) Forward pass:

def forward(self, x):
    x = self.input_layer(x)
    x = self.middle_layer(x)
    x = self.output_layer(x)
    return x

2) Network initialization:

def __init__(self, feature_size, hidden_size, output_size):
    super(Feedforward, self).__init__()   # required before assigning submodules
    self.input_layer  = nn.Linear(feature_size, hidden_size)
    self.middle_layer = nn.Linear(hidden_size, hidden_size)
    self.output_layer = nn.Linear(hidden_size, output_size)

3) Preprocess your data before packing it into a CUDA Variable:

your_tensor.view(batch_size, feature_size)
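
Putting the three steps together, a rough end-to-end sketch (dh, batch_size, feature_size, hidden_size, and output_size are stand-ins borrowed from the question, not tested values):

dtype = torch.cuda.FloatTensor

network = Feedforward(feature_size, hidden_size, output_size)
network.cuda()                                # step 2: parameters now expect CUDA tensors

x, y = dh.get_batch(batch_size)
x = x.view(batch_size, feature_size)          # step 3: shape the raw tensor
x = torch.autograd.Variable(x.type(dtype))    # step 1: CUDA tensor, wrapped in a Variable
hypo = network(x)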

Hope this helps!