RuntimeError when working with a pre-trained BERT model

Asked: 2020-03-06 09:50:20

Tags: python tensorflow classification pytorch bert-language-model

Hello, here is the part of my code that does classification with a pre-trained BERT model:

    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased",         # Use the 12-layer BERT model, with an uncased vocab.
        num_labels = 2,              # The number of output labels--2 for binary classification.
                                     # You can increase this for multi-class tasks.
        output_attentions = False,   # Whether the model returns attentions weights.
        output_hidden_states = False, # Whether the model returns all hidden-states.
    )

...

    for step, batch in enumerate(train_dataloader):

        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        outputs = model(b_input_ids,
                        token_type_ids=None,
                        attention_mask=b_input_mask,
                        labels=b_labels)

But then I get this error message:

    RuntimeError: Expected tensor for argument #1 'indices' to have scalar type Long; but got torch.IntTensor instead (while checking arguments for embedding)

So I think I should convert b_input_ids to a Long tensor, but I don't know how to do that. Thanks in advance for your help!

1 answer:

Answer 0 (score: 0)

Finally got it working with .to(torch.int64). The embedding layer inside BERT requires its index tensor (the input ids) to have scalar type Long, i.e. int64, so casting b_input_ids resolves the dtype mismatch.
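A minimal sketch of where that cast could go in the training loop from the question (reusing model, device, and train_dataloader from the question's setup; placing the cast right after moving the batch to the device is an assumption about the cleanest spot, and .long() is an equivalent shorthand for .to(torch.int64)):

    for step, batch in enumerate(train_dataloader):

        # nn.Embedding requires Long (int64) indices, so cast the token ids
        b_input_ids = batch[0].to(device).to(torch.int64)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)

        outputs = model(b_input_ids,
                        token_type_ids=None,
                        attention_mask=b_input_mask,
                        labels=b_labels)

Only the token ids strictly need this cast for the embedding lookup; note, though, that with num_labels=2 the model computes a cross-entropy loss, which also expects Long targets, so batch[2] may need the same treatment if it is an IntTensor.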