PyTorch: random output from the pytorch-pretrained-BERT model

Date: 2019-08-06 17:19:18

Tags: python nlp pytorch

When I try question answering with the "pytorch-pretrained-BERT" model, I notice that the output looks random every time I evaluate the example, and is therefore incorrect. I am following this tutorial.

text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"

(...)

questionAnswering_model = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'bertForQuestionAnswering', 'bert-base-cased')
questionAnswering_model.eval()

# Predict the start and end positions logits
with torch.no_grad():
    start_logits, end_logits = questionAnswering_model(tokens_tensor, segments_tensors)

start = np.argmax(start_logits[0])
end = np.argmax(end_logits[0])

answer = tokens_tensor[start:end]
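For completeness, the elided (...) step builds tokens_tensor and segments_tensors following the tutorial. A minimal sketch of that part, assuming the bertTokenizer hub entry point and the do_basic_tokenize flag as shown on the hub page:

# Load the matching WordPiece tokenizer from the same hub repo
tokenizer = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'bertTokenizer', 'bert-base-cased', do_basic_tokenize=False)

# Split the text into word pieces and map them to vocabulary ids
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)

# Segment ids: 0 for the question (up to the first [SEP]), 1 for the context
first_sep = tokenized_text.index('[SEP]')
segments_ids = [0] * (first_sep + 1) + [1] * (len(tokenized_text) - first_sep - 1)

tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

# NB: start and end index token positions, so a typical decode takes word
# pieces from the token list with an inclusive end, e.g.
# answer = tokenized_text[start:end + 1]
# (tokens_tensor holds vocabulary ids, not word pieces).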

Example outputs:

1: ['Who', 'was', 'Jim', 'He', '##nson', '?', '[SEP]', 'Jim']

2: ['##nson', 'was', 'a', 'puppet']

Am I using the start and end logits in the right way to get the answer? How can I fix the random outputs?

Thanks in advance.

0 Answers:

No answers yet.