Running DeepLab v3+ with TensorRT

Date: 2018-11-15 13:43:36

Tags: tensorflow tensorrt deeplab

I am trying to optimize a DeepLab v3+ model with TensorRT, but I get the following error:

    UFF Version 0.5.5
=== Automatically deduced input nodes ===
[name: "ImageTensor"
op: "Placeholder"
attr {
  key: "_output_shapes"
  value {
    list {
      shape {
        dim {
          size: 1
        }
        dim {
          size: -1
        }
        dim {
          size: -1
        }
        dim {
          size: 3
        }
      }
    }
  }
}
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "Squeeze_1"
op: "Squeeze"
input: "resize_images/ResizeNearestNeighbor"
attr {
  key: "T"
  value {
    type: DT_INT64
  }
}
attr {
  key: "_output_shapes"
  value {
    list {
      shape {
        dim {
          size: 1
        }
        dim {
          size: -1
        }
        dim {
          size: -1
        }
      }
    }
  }
}
attr {
  key: "squeeze_dims"
  value {
    list {
      i: 3
    }
  }
}
]
==========================================

Using output node Squeeze_1
Converting to UFF graph
Warning: No conversion function registered for layer: ResizeNearestNeighbor yet.
Converting resize_images/ResizeNearestNeighbor as custom op: ResizeNearestNeighbor
Warning: No conversion function registered for layer: ExpandDims yet.
Converting ExpandDims_1 as custom op: ExpandDims
Warning: No conversion function registered for layer: Slice yet.
Converting Slice as custom op: Slice
Warning: No conversion function registered for layer: ArgMax yet.
Converting ArgMax as custom op: ArgMax
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_2 as custom op: ResizeBilinear
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting ResizeBilinear_1 as custom op: ResizeBilinear
Traceback (most recent call last):
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\iariav\Anaconda3\envs\tensorflow\Scripts\convert-to-uff.exe\__main__.py", line 9, in <module>
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\bin\convert_to_uff.py", line 89, in main
    debug_mode=args.debug
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 187, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "c:\users\iariav\anaconda3\envs\tensorflow\lib\site-packages\uff\converters\tensorflow\converter.py", line 72, in convert_tf2uff_node
    inp_node = tf_nodes[inp_name]
KeyError: 'logits/semantic/biases/read'

As far as I understand, this is caused by certain layers that the UFF converter does not support? Has anyone successfully converted a DeepLab model to UFF? I am using the original DeepLab v3+ model from TensorFlow.
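
For reference, the conversion was run with convert-to-uff as shown in the traceback; the Python equivalent would look roughly like the sketch below (the file names are placeholders, and Squeeze_1 is the output node the converter deduced above):

    import uff

    # Convert the frozen TensorFlow graph to UFF.
    # "frozen_inference_graph.pb" and "deeplab.uff" are placeholder names;
    # "Squeeze_1" is the output node deduced automatically in the log above.
    uff.from_tensorflow_frozen_model(
        frozen_file="frozen_inference_graph.pb",
        output_nodes=["Squeeze_1"],
        output_filename="deeplab.uff",
    )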

Thanks

2 answers:

Answer 0 (score: 0)

Yes, getting a specific model to work in TensorRT can sometimes be a bit tricky because of layer support. With the new TensorRT 5 GA, these are the supported layers (taken from the Developer Guide):

Tensorflow Supported Layers

As you can see, layers like ResizeNearestNeighbor, ResizeBilinear and ArgMax are not supported yet. Your best approach, and what I ended up doing, is to port the network and create the layers I need with the C++ API. Check IPluginV2 and IPluginCreator to see whether you can implement those layers yourself.

I think support for more layers will be rolled out over time, but if you can't wait, that is what I would try.
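
Before committing to writing plugins, it can help to list which op types the frozen graph actually uses and flag the ones discussed in this thread. A minimal sketch in plain TensorFlow 1.x (the file name is a placeholder, and the flagged set below is only the handful of ops mentioned here, not the full supported-layer list from the Developer Guide):

    import tensorflow as tf

    # Load the frozen GraphDef (placeholder file name).
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # Collect the unique op types used by the graph.
    op_types = sorted({node.op for node in graph_def.node})

    # Illustrative subset of the ops mentioned in this thread;
    # check the TensorRT Developer Guide for the authoritative list.
    known_problem_ops = {"ResizeNearestNeighbor", "ResizeBilinear",
                         "ArgMax", "Slice", "ExpandDims"}

    for op in op_types:
        marker = "  <-- likely needs a plugin" if op in known_problem_ops else ""
        print(op + marker)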

Answer 1 (score: 0)

I have run a DeepLab v3+ model on a Jetson Nano using TF-TRT. According to the TensorRT release notes:

Deprecation of Caffe Parser and UFF Parser - We are deprecating the Caffe Parser and UFF Parser in TensorRT 7. They will be tested and functional in the next major release, TensorRT 8, but we plan to remove the support in a subsequent major release. Plan to migrate your workflow to use tf2onnx, keras2onnx or TensorFlow-TensorRT (TF-TRT) for deployment.

With TF-TRT I was able to get an optimized TensorRT graph, and it ran successfully even after retraining on my own dataset.

Also, if some operators are not supported in the version you are using, execution falls back to TensorFlow for those specific operators. This means there will be no errors at run time; the graph is simply optimized to a lesser degree.
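
For illustration, a minimal TF-TRT sketch using the TensorFlow 1.x contrib API (tensorflow.contrib.tensorrt); the frozen-graph file name is a placeholder, Squeeze_1 is the output node from the question's log, and the precision and workspace settings are only example values, not the exact configuration used here:

    import tensorflow as tf
    from tensorflow.contrib import tensorrt as trt

    # Load the frozen DeepLab graph (placeholder file name).
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())

    # Let TF-TRT replace the supported subgraphs with TensorRT engines;
    # unsupported ops simply stay as TensorFlow ops (the fallback described above).
    trt_graph = trt.create_inference_graph(
        input_graph_def=graph_def,
        outputs=["Squeeze_1"],
        max_batch_size=1,
        max_workspace_size_bytes=1 << 30,
        precision_mode="FP16",
    )

    # trt_graph can now be imported with tf.import_graph_def and run as usual.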

References:

  1. TF-TRT User Guide: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#integrate-ovr
  2. TensorFlow blog: https://blog.tensorflow.org/2019/06/high-performance-inference-with-TensorRT.html