I'm trying to do argument mining with BERT, identifying argument components as a BIO tagging task. I followed the code from the article Named Entity Recognition With Bert closely, adapting it to fit my data. I'm running the code on Colab with the hardware accelerator set to GPU.
Does anyone know a solution to my problem? Thanks in advance!
epochs = 5
max_grad_norm = 1.0

for _ in trange(epochs, desc="Epoch"):
    # TRAIN loop
    model.train()
    tr_loss = 0
    nb_tr_examples, nb_tr_steps = 0, 0
    for step, batch in enumerate(train_dataloader):
        # add batch to gpu
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch
        # forward pass
        loss = model(b_input_ids, token_type_ids=None,
                     attention_mask=b_input_mask, labels=b_labels)
        # backward pass
        loss.backward()
        # track train loss
        tr_loss += loss.item()
        nb_tr_examples += b_input_ids.size(0)
        nb_tr_steps += 1
        # gradient clipping
        torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
        # update parameters
        optimizer.step()
        model.zero_grad()
    # print train loss per epoch
    print("Train loss: {}".format(tr_loss/nb_tr_steps))

    # VALIDATION on validation set
    model.eval()
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0
    predictions, true_labels = [], []
    for batch in valid_dataloader:
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch
        with torch.no_grad():
            tmp_eval_loss = model(b_input_ids, token_type_ids=None,
                                  attention_mask=b_input_mask, labels=b_labels)
            logits = model(b_input_ids, token_type_ids=None,
                           attention_mask=b_input_mask)
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
        true_labels.append(label_ids)
        tmp_eval_accuracy = flat_accuracy(logits, label_ids)
        eval_loss += tmp_eval_loss.mean().item()
        eval_accuracy += tmp_eval_accuracy
        nb_eval_examples += b_input_ids.size(0)
        nb_eval_steps += 1
    eval_loss = eval_loss/nb_eval_steps
    print("Validation loss: {}".format(eval_loss))
    print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
    pred_tags = [tags_vals[p_i] for p in predictions for p_i in p]
    valid_tags = [tags_vals[l_ii] for l in true_labels for l_i in l for l_ii in l_i]
    print("F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
Although I followed the original code closely, it doesn't work for me, and I keep running into this error...
RuntimeError Traceback (most recent call last)
<ipython-input-45-aa659bbf8fac> in <module>()
12
13 # forward pass
---> 14 loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
15 # backward pass
16 loss.backward()
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1465 # remove once script supports set_grad_enabled
1466 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1467 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1468
1469
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'index'
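The error means the embedding lookup (torch.embedding) received CUDA index tensors while the embedding weight matrix was still on the CPU: the batches are moved to the GPU with t.to(device), but the model's parameters never are. A quick way to confirm the mismatch (a diagnostic sketch, not from the original post) is to compare devices:

    # Compare where the model weights and a batch tensor live; a mismatch
    # (cpu vs. cuda:0) is exactly what triggers the RuntimeError above.
    print(next(model.parameters()).device)  # cpu    -> weights were never moved
    print(b_input_ids.device)               # cuda:0 -> batch moved by t.to(device)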
Solution: setting model = model.to(device) and reducing the batch size from 32 to 4 makes the program run without any runtime errors.
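As a minimal sketch of that fix (the dataloader line is hypothetical; train_data stands in for whatever TensorDataset was built earlier):

    import torch
    from torch.utils.data import DataLoader, RandomSampler

    # Pick the Colab GPU when available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Move every parameter, including the embedding weights, onto the same
    # device as the batches; this resolves the backend-mismatch RuntimeError.
    model = model.to(device)

    # Hypothetical dataloader setup: shrinking the batch size from 32 to 4
    # keeps the memory footprint within Colab's GPU limits (otherwise the
    # next failure is typically a CUDA out-of-memory error).
    train_dataloader = DataLoader(train_data, sampler=RandomSampler(train_data),
                                  batch_size=4)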