Below are the results of my fine-tuning.
epoch  Training Loss  Valid. Loss  Valid. Accur.  Training Time  Validation Time
1      0.16           0.11         0.96           0:02:11        0:00:05
2      0.07           0.13         0.96           0:02:19        0:00:05
3      0.03           0.14         0.97           0:02:22        0:00:05
4      0.02           0.16         0.96           0:02:21        0:00:05
Next, I tried to use the model to predict labels from a CSV file. I created a label column, set its type to int64, and ran the prediction.
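(For context, a minimal sketch of the kind of data preparation described above, feeding into the prediction loop below; the file name test.csv, the sentence/label column names, the tokenizer settings, and the batch size are illustrative assumptions, not taken from the original post.)

import pandas as pd
import torch
from torch.utils.data import DataLoader, SequentialSampler, TensorDataset
from transformers import BertTokenizer

# 'test.csv' and the column names 'sentence' / 'label' are placeholders.
df = pd.read_csv('test.csv')

# Note: this cast raises an error if the column still contains NaN,
# so missing values have to be handled before it.
df['label'] = df['label'].astype('int64')

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
encoded = tokenizer(list(df['sentence']), padding=True, truncation=True,
                    max_length=64, return_tensors='pt')

input_ids = encoded['input_ids']
attention_masks = encoded['attention_mask']
labels = torch.tensor(df['label'].values)

prediction_data = TensorDataset(input_ids, attention_masks, labels)
prediction_dataloader = DataLoader(prediction_data,
                                   sampler=SequentialSampler(prediction_data),
                                   batch_size=32)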
print('Predicting labels for {:,} test sentences...'.format(len(input_ids)))

model.eval()

# Tracking variables
predictions, true_labels = [], []

# Predict
for batch in prediction_dataloader:
    # Add batch to GPU
    batch = tuple(t.to(device) for t in batch)

    # Unpack the inputs from our dataloader
    b_input_ids, b_input_mask, b_labels = batch

    # Telling the model not to compute or store gradients, saving memory and
    # speeding up prediction
    with torch.no_grad():
        # Forward pass, calculate logit predictions
        outputs = model(b_input_ids, token_type_ids=None,
                        attention_mask=b_input_mask)

    logits = outputs[0]

    # Move logits and labels to CPU
    logits = logits.detach().cpu().numpy()
    label_ids = b_labels.to('cpu').numpy()

    # Store predictions and true labels
    predictions.append(logits)
    true_labels.append(label_ids)
However, although I can print out the predictions ([4.235, -4.805], etc.) and the true_labels ([NaN, NaN, ...]), I can't actually get the predicted labels (0 or 1). Am I missing something here?
Answer 0 (score: 2)
The output of the model are the logits, i.e., the scores before they are normalized into a probability distribution with softmax. If you take the output [4.235, -4.805] and run softmax over it:
In [1]: import torch
In [2]: import torch.nn.functional as F
In [3]: F.softmax(torch.tensor([4.235, -4.805]), dim=0)
Out[3]: tensor([9.9988e-01, 1.1856e-04])
you get a probability of more than 99% for label 0. With the logits as a 2D tensor (one row per example, one column per class), you can easily get the predicted classes by taking the argmax over the class dimension; since the logits in your loop are already NumPy arrays, that is

logits.argmax(axis=1)
The NaN values in true_labels most likely come from a bug in how you load the data; they are unrelated to the BERT model.
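Concretely, with the predictions list collected in your loop (a list of per-batch NumPy logit arrays), a minimal sketch of recovering the 0/1 labels:

import numpy as np

# Stack the per-batch (batch_size, 2) logit arrays into one (num_examples, 2) array.
flat_logits = np.concatenate(predictions, axis=0)

# argmax over the class dimension gives the predicted label (0 or 1) for each example.
pred_labels = flat_logits.argmax(axis=1)
print(pred_labels[:10])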