Reference on GitHub: fast-bert
I previously ran the following notebook with a BERT model to predict multi-label classification, which means I don't need a GPU driver and can use CPU memory instead. Here is the multilabel Jupyter notebook I used as a reference. This is not a memory problem, so how do I fix this error?
As the RAM size increases, so does the number of CPUs; I chose the n1-standard-4 machine type (6 vCPUs, 26 GB memory).
Sample code:
I have already switched from "cuda" to "cpu", changing
device = torch.device('cuda')
if torch.cuda.device_count() > 1:
args.multi_gpu = True
else:
args.multi_gpu = False
to
torch.device('cpu')
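As a side note, a common pattern is to avoid hard-coding the device at all. This is a minimal sketch (not from the original post) of device-agnostic selection that also only enables multi-GPU mode when more than one CUDA device actually exists:

```python
import torch

# Fall back to CPU automatically when no CUDA device is present.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Only enable multi-GPU when there is genuinely more than one device;
# on a CPU-only machine this stays False.
multi_gpu = torch.cuda.is_available() and torch.cuda.device_count() > 1
```

With this, the same script runs unchanged on GPU and CPU machines.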
Error log:
Traceback (most recent call last):
File "bert/run.py", line 146, in <module>
learner.fit(args.num_train_epochs, args.learning_rate, validate=True)
File "/home/pt4_gcp/.local/lib/python3.7/site-packages/fast_bert/learner_cls.py", line 397, in fit
outputs = self.model(**inputs)
File "/home/pt4_gcp/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/pt4_gcp/.local/lib/python3.7/site-packages/fast_bert/modeling.py", line 205, in forward
logits.view(-1, self.num_labels), labels.view(-1, self.num_labels)
File "/home/pt4_gcp/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/pt4_gcp/.local/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 617, in forward
reduction=self.reduction)
File "/home/pt4_gcp/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 2433, in binary_cross_entropy_with_logits
raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
ValueError: Target size (torch.Size([64, 3])) must be the same as input size (torch.Size([32, 3]))
Answer 0 (score: 0):
The traceback shows that the loss function receives logits of shape [32, 3] but labels of shape [64, 3]: the input batch has 32 instances while the target batch has 64. Make sure the number of target instances matches the number of input instances.
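To illustrate the check that is failing, here is a minimal pure-Python sketch (a simplified stand-in, not PyTorch's actual implementation) of binary cross-entropy with logits. It performs the same size comparison that raises the ValueError in the traceback:

```python
import math

def bce_with_logits(inputs, targets):
    """Minimal sketch of binary cross-entropy with logits over flat lists.

    Mirrors the shape check in torch.nn.functional.binary_cross_entropy_with_logits:
    the target size must equal the input size, or a ValueError is raised.
    """
    if len(targets) != len(inputs):
        raise ValueError(
            "Target size ({}) must be the same as input size ({})".format(
                len(targets), len(inputs)))
    losses = []
    for x, y in zip(inputs, targets):
        # Numerically stable form: max(x, 0) - x*y + log(1 + exp(-|x|))
        losses.append(max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x))))
    return sum(losses) / len(losses)

# Matching sizes: computes a loss.
bce_with_logits([0.5, -1.0, 2.0], [1.0, 0.0, 1.0])

# Mismatched sizes (32 inputs vs 64 targets, as in the traceback):
# raises ValueError, just like the error in the question.
```

So the fix is on the data side: the labels tensor fed to the loss must have the same batch dimension as the logits produced by the model.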