Converting a SageMaker model (MXNet) to ONNX: infer_shape error

Date: 2020-01-02 15:02:22

Tags: python amazon-sagemaker mxnet onnx

What works

I am working in a SageMaker Jupyter notebook (environment: anaconda3/envs/mxnet_p36/lib/python3.6).

I successfully ran this tutorial: https://github.com/onnx/tutorials/blob/master/tutorials/MXNetONNXExport.ipynb


What does not work

Then, in the same environment, I tried to apply the same procedure to the files produced by a SageMaker training job. I used the S3 model artifact as input and changed a few lines of the tutorial code to fit my case. The model was trained with the built-in Object Detection algorithm (SSD with a VGG-16 base network) and the hyperparameter image_shape: 300.
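For completeness, this is roughly how I pull the model.tar.gz artifact down and unpack it to obtain the two files used in the next snippet (the bucket name and key below are placeholders, not my real values):

import tarfile
import boto3

# Placeholders: replace with the bucket and key of the training job's output artifact.
bucket = 'my-sagemaker-bucket'
key = 'object-detection/output/model.tar.gz'

# Download the artifact and unpack it; the archive contains the
# model_algo_1-symbol.json and model_algo_1-0000.params files used below.
boto3.client('s3').download_file(bucket, key, 'model.tar.gz')
with tarfile.open('model.tar.gz') as tar:
    tar.extractall('.')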

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

sym = './model_algo_1-symbol.json'
params = './model_algo_1-0000.params'
input_shape = (1, 3, 300, 300)

and I passed verbose=True as the last argument to export_model():

onnx_file = './model_algo_1.onnx'  # output path (name chosen here for illustration)
converted_model_path = onnx_mxnet.export_model(sym, params, [input_shape], np.float32, onnx_file, True)

When running the code, I get this error (full output at the end of the post):

MXNetError: Error in operator multibox_target: [14:36:32] src/operator/contrib/./multibox_target-inl.h:224: Check failed: lshape.ndim() == 3 (-1 vs. 3) : Label should be [batch, num_labels, label_width] tensor
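The failing check refers to a ground-truth label input, which makes me suspect the exported symbol is still the training graph rather than a deploy graph. A quick diagnostic sketch (assuming the same files as above) to inspect what the symbol actually expects:

import mxnet as mx

sym_graph = mx.sym.load('./model_algo_1-symbol.json')

# If a 'label' argument is listed, the graph still expects ground-truth boxes,
# which export_model does not supply -- consistent with the error above.
print([arg for arg in sym_graph.list_arguments() if 'label' in arg])

# A multibox_target node would mean training-only layers are still in the graph;
# a multibox detection node would be the inference-side output.
print([out for out in sym_graph.get_internals().list_outputs() if 'multibox' in out.lower()])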

Question

So far I have not been able to find a solution:

  • Maybe input_shape = (1,3,300,300) is wrong, but I cannot figure out what it should be;
  • the model may contain some unexpected layers (see the sketch right after this list).
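One direction I have been considering, shown here only as a sketch: if the graph already carries an inference-side detection output (which I have not verified), slice the graph at that node and export the sub-graph, so the training-only multibox_target branch is dropped. The 'detection' name match and the output file name are my assumptions:

import mxnet as mx
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

sym_graph = mx.sym.load('./model_algo_1-symbol.json')

# export_model also accepts a params dict; strip the 'arg:'/'aux:' prefixes
# that mx.nd.load keeps on the keys.
saved = mx.nd.load('./model_algo_1-0000.params')
params_dict = {name.split(':', 1)[-1]: arr for name, arr in saved.items()}

# Assumption: an inference-side detection node exists in the graph.
# If the list is empty, the deploy symbol would have to be rebuilt instead.
det_outputs = [o for o in sym_graph.get_internals().list_outputs() if 'detection' in o.lower()]
if det_outputs:
    deploy_sym = sym_graph.get_internals()[det_outputs[-1]]
    onnx_mxnet.export_model(deploy_sym, params_dict, [(1, 3, 300, 300)],
                            np.float32, './model_algo_1_deploy.onnx', True)
else:
    print('No detection output found in the symbol; the training graph cannot be exported as-is.')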

Does anyone know how to fix this, or a workaround to use the model on my local machine?
(I mean without having to deploy it to AWS.)


Detailed output

  infer_shape error. Arguments:
  data: (1, 3, 300, 300)
  conv3_2_weight: (256, 256, 3, 3)
  fc7_bias: (1024,)
  multi_feat_3_conv_1x1_conv_weight: (128, 512, 1, 1)
  conv4_1_bias: (512,)
  conv5_3_bias: (512,)
  relu4_3_cls_pred_conv_bias: (16,)
  multi_feat_2_conv_3x3_relu_cls_pred_conv_weight: (24, 512, 3, 3)
  relu4_3_loc_pred_conv_bias: (16,)
  relu7_cls_pred_conv_weight: (24, 1024, 3, 3)
  conv3_3_bias: (256,)
  multi_feat_5_conv_3x3_relu_cls_pred_conv_weight: (16, 256, 3, 3)
  conv4_3_weight: (512, 512, 3, 3)
  conv1_2_bias: (64,)
  multi_feat_2_conv_3x3_relu_cls_pred_conv_bias: (24,)
  multi_feat_4_conv_3x3_conv_weight: (256, 128, 3, 3)
  conv4_1_weight: (512, 256, 3, 3)
  relu4_3_scale: (1, 512, 1, 1)
  multi_feat_4_conv_3x3_conv_bias: (256,)
  multi_feat_5_conv_3x3_relu_cls_pred_conv_bias: (16,)
  conv2_2_weight: (128, 128, 3, 3)
  multi_feat_3_conv_3x3_relu_loc_pred_conv_weight: (24, 256, 3, 3)
  multi_feat_5_conv_3x3_conv_bias: (256,)
  conv5_1_bias: (512,)
  multi_feat_3_conv_3x3_conv_bias: (256,)
  conv2_1_bias: (128,)
  conv5_2_weight: (512, 512, 3, 3)
  multi_feat_5_conv_3x3_relu_loc_pred_conv_weight: (16, 256, 3, 3)
  multi_feat_4_conv_3x3_relu_loc_pred_conv_weight: (16, 256, 3, 3)
  multi_feat_2_conv_3x3_conv_weight: (512, 256, 3, 3)
  multi_feat_2_conv_1x1_conv_bias: (256,)
  multi_feat_2_conv_1x1_conv_weight: (256, 1024, 1, 1)
  conv4_3_bias: (512,)
  relu7_cls_pred_conv_bias: (24,)
  fc6_bias: (1024,)
  conv2_1_weight: (128, 64, 3, 3)
  multi_feat_2_conv_3x3_conv_bias: (512,)
  multi_feat_2_conv_3x3_relu_loc_pred_conv_weight: (24, 512, 3, 3)
  multi_feat_5_conv_1x1_conv_bias: (128,)
  relu7_loc_pred_conv_bias: (24,)
  multi_feat_3_conv_3x3_relu_loc_pred_conv_bias: (24,)
  conv3_3_weight: (256, 256, 3, 3)
  conv1_2_weight: (64, 64, 3, 3)
  multi_feat_2_conv_3x3_relu_loc_pred_conv_bias: (24,)
  conv1_1_bias: (64,)
  multi_feat_4_conv_3x3_relu_cls_pred_conv_bias: (16,)
  conv4_2_weight: (512, 512, 3, 3)
  conv5_3_weight: (512, 512, 3, 3)
  relu7_loc_pred_conv_weight: (24, 1024, 3, 3)
  multi_feat_3_conv_3x3_conv_weight: (256, 128, 3, 3)
  conv3_1_weight: (256, 128, 3, 3)
  multi_feat_4_conv_3x3_relu_cls_pred_conv_weight: (16, 256, 3, 3)
  relu4_3_loc_pred_conv_weight: (16, 512, 3, 3)
  multi_feat_5_conv_3x3_conv_weight: (256, 128, 3, 3)
  fc7_weight: (1024, 1024, 1, 1)
  conv4_2_bias: (512,)
  multi_feat_3_conv_3x3_relu_cls_pred_conv_weight: (24, 256, 3, 3)
  multi_feat_3_conv_3x3_relu_cls_pred_conv_bias: (24,)
  conv2_2_bias: (128,)
  conv5_1_weight: (512, 512, 3, 3)
  multi_feat_3_conv_1x1_conv_bias: (128,)
  multi_feat_4_conv_3x3_relu_loc_pred_conv_bias: (16,)
  conv1_1_weight: (64, 3, 3, 3)
  multi_feat_4_conv_1x1_conv_bias: (128,)
  conv3_1_bias: (256,)
  multi_feat_5_conv_3x3_relu_loc_pred_conv_bias: (16,)
  multi_feat_4_conv_1x1_conv_weight: (128, 256, 1, 1)
  fc6_weight: (1024, 512, 3, 3)
  multi_feat_5_conv_1x1_conv_weight: (128, 256, 1, 1)
  conv3_2_bias: (256,)
  conv5_2_bias: (512,)
  relu4_3_cls_pred_conv_weight: (16, 512, 3, 3)

0 Answers:

No answers yet.