PyTorch model loading performance

Date: 2019-11-23 19:43:28

Tags: python pytorch

Does the model-loading performance here look right?

Total time: 5.4545 s
Function: load_model at line 81

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    80                                           @profile
    81                                           def load_model(dirname, device):
    82                                               """ load model from disk """
    83         1         90.0      90.0      0.0      device = torch.device(device)
    84         1         26.0      26.0      0.0      modelfile = os.path.join(dirname, 'model.py')
    85         1          7.0       7.0      0.0      weights = os.path.join(dirname, 'weights_%s.tar' % weights)
    86         1    5379756.0 5379756.0     98.6      model = torch.load(modelfile, map_location=device)
    87         1      72966.0   72966.0      1.3      model.load_state_dict(torch.load(weights, map_location=device))
    88         1       1648.0    1648.0      0.0      model.eval()
    89         1          1.0       1.0      0.0      return model

The initial call on line 86, which loads a 26 MB model, is roughly 100x slower than loading the 26 MB checkpoint on line 87, and 5.45 seconds is much longer than I would expect. Is that right?
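The two calls do different amounts of work: line 86 unpickles an entire `nn.Module`, while line 87 unpickles only a `state_dict` of tensors. The comparison can be reproduced in miniature with a stand-in network (a sketch; the file names mirror the question, but the architecture is a made-up placeholder, not the asker's model):

```python
import os
import tempfile
import time

import torch
import torch.nn as nn

# Hypothetical stand-in for the 26 MB network in the question.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))

dirname = tempfile.mkdtemp()
modelfile = os.path.join(dirname, "model.py")       # full pickled module, as in the question
weightsfile = os.path.join(dirname, "weights.tar")  # tensors only

torch.save(model, modelfile)                 # pickles the whole nn.Module
torch.save(model.state_dict(), weightsfile)  # pickles only the parameter tensors

t0 = time.perf_counter()
# weights_only=False is needed on PyTorch >= 2.6 to unpickle a full module;
# drop the argument on versions that predate it (such as the 1.2 used here).
m = torch.load(modelfile, map_location="cpu", weights_only=False)
t1 = time.perf_counter()
m.load_state_dict(torch.load(weightsfile, map_location="cpu"))
t2 = time.perf_counter()

print(f"full module: {t1 - t0:.4f}s   state_dict: {t2 - t1:.4f}s")
```

For a model this small both loads finish in milliseconds on CPU, so the shape of the timings (not the ~5 s absolute number) is what this sketch illustrates.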

I am using PyTorch 1.2 with device="cuda", and I have confirmed the same performance with 1.3.1. Adding calls to torch.cuda.synchronize confirms that essentially all of the time really is spent inside torch.load:

    86         1    5395545.0 5395545.0     98.6      model = torch.load(modelfile, map_location=device)
    87         1         76.0      76.0      0.0      torch.cuda.synchronize(device=device)
    88         1      72403.0   72403.0      1.3      model.load_state_dict(torch.load(weights, map_location=device))
    89         1         52.0      52.0      0.0      torch.cuda.synchronize(device=device)
    90         1       1640.0    1640.0      0.0      model.eval()
    91         1         21.0      21.0      0.0      torch.cuda.synchronize(device=device)
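Since the torch.load on line 86 is the first CUDA operation in the process, one thing worth ruling out is one-time CUDA context initialisation, which can cost seconds by itself and would be attributed to whichever call triggers it. A quick check (the `timed` helper is ad hoc, not part of any API) is to pay that cost with a trivial tensor before loading:

```python
import time

import torch

def timed(fn, label):
    """Ad-hoc timer: run fn once and print how long it took."""
    t0 = time.perf_counter()
    out = fn()
    print(f"{label}: {time.perf_counter() - t0:.3f}s")
    return out

if torch.cuda.is_available():
    # The first CUDA operation in a process pays the one-time cost of
    # initialising the driver context; subsequent operations do not.
    timed(lambda: torch.zeros(1, device="cuda"), "first cuda op (context init)")
    timed(lambda: torch.zeros(1, device="cuda"), "second cuda op")
else:
    print("CUDA not available; nothing to initialise.")
```

If the first trivial op absorbs most of the ~5 seconds, then torch.load itself is not the bottleneck and the profile is measuring device setup.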

Am I doing something wrong here, or is this typical?

0 Answers

No answers yet.