RuntimeError: Could not run 'aten::xxxx': cannot load a PyTorch model trained on a TPU

Date: 2020-03-04 04:55:30

Tags: pytorch

I added masked language modeling (MLM) training on top of a Japanese BERT model, using a TPU on Google Colab. When loading the resulting model I get the error below. Is there a way to load the model?

Code

from transformers import BertJapaneseTokenizer, BertForMaskedLM
# Load pre-trained model
model = BertForMaskedLM.from_pretrained('/content/drive/My Drive/Bert/models/sample/')
model.eval()

Output

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    469             try:
--> 470                 state_dict = torch.load(resolved_archive_file, map_location="cpu")
    471             except Exception:
/usr/local/lib/python3.6/dist-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
    528                 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 529         return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
    530 
/usr/local/lib/python3.6/dist-packages/torch/serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
    701     unpickler.persistent_load = persistent_load
--> 702     result = unpickler.load()
    703 
/usr/local/lib/python3.6/dist-packages/torch/_utils.py in _rebuild_xla_tensor(data, dtype, device, requires_grad)
    151 def _rebuild_xla_tensor(data, dtype, device, requires_grad):
--> 152     tensor = torch.from_numpy(data).to(dtype=dtype, device=device)
    153     tensor.requires_grad = requires_grad
RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'XLATensorId' backend. 'aten::empty.memory_format' is only available for these backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId, SparseCUDATensorId].

1 Answer

Answer 0 (score: 1)

I ran into the same error when using transformers, and here is how I fixed it.

After training on Colab, I had to move the model to the CPU first. Basically, run:

model.to('cpu')

Then save the model; that allowed me to import the weights in another instance.
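For illustration, here is a minimal sketch of that save/load round trip. It uses a tiny stand-in module instead of BertForMaskedLM, and the checkpoint path is a placeholder; the point is that moving the model to CPU before saving produces plain CPU tensors in the checkpoint, which can then be loaded on any machine:

```python
import torch
import torch.nn as nn

# Tiny stand-in for the real model (e.g. BertForMaskedLM).
model = nn.Linear(4, 2)

# Move the weights to CPU before saving, so the checkpoint holds
# ordinary CPU tensors rather than XLA tensors.
model.to('cpu')
torch.save(model.state_dict(), 'sample.bin')

# Loading now works on an instance without a TPU.
state_dict = torch.load('sample.bin', map_location='cpu')
model2 = nn.Linear(4, 2)
model2.load_state_dict(state_dict)
```

With a transformers model, the same idea applies: call `model.to('cpu')` before `model.save_pretrained(...)`, then `from_pretrained` should load the directory without the XLA backend error.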

As the error message hints:

RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'XLATensorId' backend. 'aten::empty.memory_format' is only available for these backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId, SparseCUDATensorId]