Problem with nn.Embedding in PyTorch: expected scalar type Long but got torch.cuda.FloatTensor (how to fix)?

Asked: 2019-10-14 10:28:37

Tags: pytorch recurrent-neural-network

So I have an RNN encoder that is part of a larger language model, where the pipeline is encode -> RNN -> decode.

As part of the RNN class's __init__, I have the following:

self.encode_this = nn.Embedding(self.vocab_size, self.embedded_vocab_dim)

Now I am trying to implement a forward method that takes a batch, encodes it, and then decodes it:

def f_calc(self, batch):

    #Here, batch.shape[0] is the size of batch while batch.shape[1] is the sequence length

    hidden_states = (torch.zeros(self.num_layers, batch.shape[0], self.hidden_vocab_dim).to(device))
    embedded_states = (torch.zeros(batch.shape[0],batch.shape[1], self.embedded_vocab_dim).to(device))

    o1, h = self.encode_this(embedded_states)

However, my problem is always with the encoder, which gives me the following error:

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1465         # remove once script supports set_grad_enabled
   1466         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1467     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1468 
   1469 

RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
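The error can be reproduced in isolation: `torch.zeros(...)` creates a float32 tensor by default, and passing a float tensor to `nn.Embedding` raises this RuntimeError because embedding lookups need integer indices. A minimal sketch (the layer sizes here are illustrative, not from the question):

```python
import torch

emb = torch.nn.Embedding(num_embeddings=26, embedding_dim=3)

try:
    emb(torch.zeros(4))  # float32 by default -> not valid indices
except RuntimeError as e:
    print("RuntimeError:", e)  # complains about the 'indices' scalar type

# casting the indices to long satisfies the dtype requirement
out = emb(torch.zeros(4).long())
print(out.shape)  # torch.Size([4, 3])
```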

Does anyone know how to fix this? I'm completely new to PyTorch, so forgive me if this is a silly question. I know some form of type conversion is involved, but I'm not sure how to go about it...

Thanks very much!

1 Answer:

Answer 0 (score: 0)

An embedding layer expects its input to be integer indices (scalar type Long), not floats.

import torch as t

emb = t.nn.Embedding(embedding_dim=3, num_embeddings=26)

emb(t.LongTensor([0,1,2]))
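For reference, that call looks up one embedding vector per index and returns a float tensor; a quick sketch of checking its shape and dtype:

```python
import torch as t

emb = t.nn.Embedding(embedding_dim=3, num_embeddings=26)
out = emb(t.LongTensor([0, 1, 2]))  # look up rows 0, 1, 2 of the weight matrix

print(out.shape)  # torch.Size([3, 3]) -- 3 indices, 3-dim embeddings
print(out.dtype)  # torch.float32
```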


Add .long() in your code:

embedded_states = (torch.zeros(batch.shape[0],batch.shape[1], self.embedded_vocab_dim).to(device)).long()
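Note that casting a zeros tensor to long only silences the dtype error; every lookup would then fetch the embedding of token 0. Typically the token indices in `batch` are what get passed to the embedding layer. A minimal sketch of that, assuming illustrative sizes (vocab_size=26, embedded_vocab_dim=3) rather than anything from the question:

```python
import torch

encode_this = torch.nn.Embedding(26, 3)  # (vocab_size, embedded_vocab_dim)

batch = torch.randint(0, 26, (2, 5))   # [batch_size, seq_len] token indices
embedded = encode_this(batch.long())   # cast defensively, then embed
print(embedded.shape)  # torch.Size([2, 5, 3])
```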