PyTorch summary fails with Huggingface model

Date: 2021-07-29 13:51:38

Tags: python pytorch huggingface-transformers

I want a summary of a PyTorch model downloaded from Huggingface.

Am I doing something wrong here?

from torchinfo import summary
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
summary(model, input_size=(16, 512))

This gives the error:

---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

/usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    257             if isinstance(x, (list, tuple)):
--> 258                 _ = model.to(device)(*x, **kwargs)
    259             elif isinstance(x, dict):

11 frames

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used

/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict)
   1530             output_hidden_states=output_hidden_states,
-> 1531             return_dict=return_dict,
   1532         )

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1070 
-> 1071         result = forward_call(*input, **kwargs)
   1072         if _global_forward_hooks or self._forward_hooks:

/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
    988             inputs_embeds=inputs_embeds,
--> 989             past_key_values_length=past_key_values_length,
    990         )

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1070 
-> 1071         result = forward_call(*input, **kwargs)
   1072         if _global_forward_hooks or self._forward_hooks:

/usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
    214         if inputs_embeds is None:
--> 215             inputs_embeds = self.word_embeddings(input_ids)
    216         token_type_embeddings = self.token_type_embeddings(token_type_ids)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1070 
-> 1071         result = forward_call(*input, **kwargs)
   1072         if _global_forward_hooks or self._forward_hooks:

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
    159             input, self.weight, self.padding_idx, self.max_norm,
--> 160             self.norm_type, self.scale_grad_by_freq, self.sparse)
    161 

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2042         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2043     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   2044 

RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)


The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)

<ipython-input-8-4f70d4e6fa82> in <module>()
      5 else:
      6     # Can't get this working
----> 7     summary(model, input_size=(16, 512)) #, device='cpu')
      8     #print(model)

/usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
    190     )
    191     summary_list = forward_pass(
--> 192         model, x, batch_dim, cache_forward_pass, device, **kwargs
    193     )
    194     formatting = FormattingOptions(depth, verbose, col_names, col_width, row_settings)

/usr/local/lib/python3.7/dist-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    268             "Failed to run torchinfo. See above stack traces for more details. "
    269             f"Executed layers up to: {executed_layers}"
--> 270         ) from e
    271     finally:
    272         if hooks is not None:

RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []

1 answer:

Answer 0 (score: 1)

There is a bug [also reported] in the torchinfo library [torchinfo.py] shown in the last line of the traceback: when dtypes is None, it defaults to creating torch.float tensors, whereas the forward method of the bert model uses torch.nn.embedding, which expects only int/long tensors.
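A minimal standalone sketch (my own, not from the original answer) that reproduces the underlying type mismatch with a bare nn.Embedding:

import torch
import torch.nn as nn

# A BERT-base-sized embedding table: 30522 tokens, 768 dimensions
emb = nn.Embedding(num_embeddings=30522, embedding_dim=768)

float_ids = torch.rand(16, 512)  # float tensor, like torchinfo's default input
# emb(float_ids)  # raises: RuntimeError: Expected tensor for argument #1 'indices'
#                 # to have one of the following scalar types: Long, Int; ...

int_ids = torch.randint(0, 30522, (16, 512))  # int64 token indices
out = emb(int_ids)  # works; out.shape == (16, 512, 768)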

The relevant code in torchinfo is:

def process_input(
    input_data: Optional[INPUT_DATA_TYPE],
    input_size: Optional[INPUT_SIZE_TYPE],
    batch_dim: Optional[int],
    device: Union[torch.device, str],
    dtypes: Optional[List[torch.dtype]] = None,
) -> Tuple[CORRECTED_INPUT_DATA_TYPE, Any]:
    """Reads sample input data to get the input size."""

    if input_size is not None:
        if dtypes is None:
            dtypes = [torch.float] * len(input_size)  # bug: float indices break nn.Embedding

It works fine if you change that last line to:

dtypes = [torch.int] * len(input_size)

EDIT (a direct solution that needs no change to torchinfo's internal code): pass the dtypes argument to summary directly:

from torchinfo import summary
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
summary(model, input_size=(2, 512), dtypes=['torch.IntTensor'])

Alternate: for a simple summary, you can use print(model) instead of the summary function.
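A further variation (my own sketch, not part of the original answer): summary also accepts an input_data argument, visible in its signature in the traceback above, so you can pass real tokenized input_ids, which are already integer tensors, and avoid the dtypes workaround entirely. This assumes this torchinfo version accepts a plain tensor for input_data:

from torchinfo import summary
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')

# input_ids from the tokenizer are int64, the dtype nn.Embedding expects
encoded = tokenizer("Hello, world!", return_tensors="pt",
                    padding="max_length", max_length=512)
summary(model, input_data=encoded["input_ids"])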