Error converting a PyTorch model to TorchScript

Date: 2018-12-17 17:23:34

Tags: pytorch torchscript

I am trying to follow the PyTorch guide to load models in C++.

The following example code works:

import torch
import torchvision

# An instance of your model.
model = torchvision.models.resnet18()

# An example input you would normally provide to your model's forward() method.
example = torch.rand(1, 3, 224, 224)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, example)
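
The point of tracing in the C++ loading guide is to serialize the resulting ScriptModule to disk so it can be loaded from C++ via `torch::jit::load`. A minimal self-contained sketch of that round trip (using a toy module instead of resnet18, so no pretrained weights are needed):

```python
import torch

# Hypothetical toy module standing in for a torchvision model.
class Tiny(torch.nn.Module):
    def forward(self, x):
        return x * 2

m = Tiny()
example = torch.rand(1, 3)

# Trace the module, serialize it, and load it back.
traced = torch.jit.trace(m, example)
traced.save("tiny_traced.pt")       # this file is what the C++ API consumes
loaded = torch.jit.load("tiny_traced.pt")

# The reloaded ScriptModule behaves like the original on the example input.
assert torch.equal(loaded(example), m(example))
```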

However, my code fails when I try other networks, such as SqueezeNet (or AlexNet):

sq = torchvision.models.squeezenet1_0(pretrained=True)
traced_script_module = torch.jit.trace(sq, example) 

>> traced_script_module = torch.jit.trace(sq, example)                                      
/home/fabio/.local/lib/python3.6/site-packages/torch/jit/__init__.py:642: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function.
 Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 785] (3.1476082801818848 vs. 3.945478677749634) and 999 other locations (100.00%)
  _check_trace([example_inputs], func, executor_options, module, check_tolerance, _force_outplace)

1 answer:

Answer 0: (score: 0)

I just found out that models loaded from torchvision.models are in training mode by default. Both AlexNet and SqueezeNet contain Dropout layers, which make inference non-deterministic in training mode, so the two forward passes compared by the tracer's consistency check disagree. Simply switching to evaluation mode fixes the problem:

sq = torchvision.models.squeezenet1_0(pretrained=True)
sq.eval()
traced_script_module = torch.jit.trace(sq, example)
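
To see why training mode breaks the tracer's check, here is a minimal pure-Python sketch (no PyTorch; `TinyDropout` is a made-up stand-in for a dropout layer) showing that a dropout layer in training mode gives different outputs on two identical forward passes, while eval mode is deterministic:

```python
import random

class TinyDropout:
    """Hypothetical stand-in for a dropout layer (illustration only)."""
    def __init__(self, p=0.5):
        self.p = p
        self.training = True  # like torchvision models, starts in training mode

    def eval(self):
        self.training = False

    def forward(self, xs):
        if self.training:
            # Training mode: each call randomly zeroes elements and rescales
            # the rest, so two passes over the same input almost surely differ.
            return [0.0 if random.random() < self.p else x / (1 - self.p)
                    for x in xs]
        # Eval mode: dropout is the identity, fully deterministic.
        return list(xs)

layer = TinyDropout()
x = [1.0] * 1000

a = layer.forward(x)
b = layer.forward(x)
print(a == b)  # training mode: almost certainly False

layer.eval()
print(layer.forward(x) == layer.forward(x))  # eval mode: True
```

This mirrors what `_check_trace` does: it runs the traced function and the original Python function on the same input and compares outputs, which can only match if the model is deterministic, hence the need for `sq.eval()` before tracing.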