So, I am using the TensorFlow SSD-MobileNet V1 COCO model. I have further trained it on my own dataset, but when I try to convert it to OpenVINO IR so that I can run it on a Raspberry Pi with a Movidius chip, I get an error. Here is the output of summarize_graph.py on the frozen model:
➜ utils sudo python3 summarize_graph.py --input_model ssd.pb
WARNING: Logging before flag parsing goes to stderr.
W0722 17:17:05.565755 4678620608 __init__.py:308] Limited tf.compat.v2.summary API due to missing TensorBoard installation.
W0722 17:17:06.696880 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:35: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.
W0722 17:17:06.697348 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:109: The name tf.MetaGraphDef is deprecated. Please use tf.compat.v1.MetaGraphDef instead.
W0722 17:17:06.697680 4678620608 deprecation_wrapper.py:119] From ../../mo/front/tf/loader.py:235: The name tf.NodeDef is deprecated. Please use tf.compat.v1.NodeDef instead.
1 input(s) detected:
Name: image_tensor, type: uint8, shape: (-1,-1,-1,3)
7 output(s) detected:
detection_boxes
detection_scores
detection_multiclass_scores
detection_classes
num_detections
raw_detection_boxes
raw_detection_scores
And this is what happens when I try to convert ssd.pb (the frozen model) to OpenVINO IR:
➜ model_optimizer sudo python3 mo_tf.py --input_model ssd.pb
Password:
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/ssd.pb
- Path for generated IR: /opt/intel/openvino_2019.1.144/deployment_tools/model_optimizer/.
- IR output name: ssd
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: None
- Operations to offload: None
- Patterns to offload: None
- Use the config file: None
Model Optimizer version: 2019.1.1-83-g28dfbfd
WARNING: Logging before flag parsing goes to stderr.
E0722 17:24:22.964164 4474824128 infer.py:158] Shape [-1 -1 -1 3] is not fully defined for output 0 of "image_tensor". Use --input_shape with positive integers to override model input shapes.
E0722 17:24:22.964462 4474824128 infer.py:178] Cannot infer shapes or values for node "image_tensor".
E0722 17:24:22.964554 4474824128 infer.py:179] Not all output shapes were inferred or fully defined for node "image_tensor".
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #40.
E0722 17:24:22.964632 4474824128 infer.py:180]
E0722 17:24:22.964720 4474824128 infer.py:181] It can happen due to bug in custom shape infer function <function tf_placeholder_ext.<locals>.<lambda> at 0x12ab64bf8>.
E0722 17:24:22.964787 4474824128 infer.py:182] Or because the node inputs have incorrect values/shapes.
E0722 17:24:22.964850 4474824128 infer.py:183] Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
E0722 17:24:22.965915 4474824128 infer.py:192] Run Model Optimizer with --log_level=DEBUG for more information.
E0722 17:24:22.966033 4474824128 main.py:317] Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "image_tensor" node.
For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38.
How do you think this should be fixed?
Answer 0 (score: 1)
I updated OpenVINO to the OpenVINO toolkit 2019 R2 and was able to generate the IR files with the following command:
python3 ~/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_use_custom_operations_config ~/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_support_api_v1.14.json --tensorflow_object_detection_api_pipeline_config pipeline.config -b 1 --data_type FP16 --reverse_input_channels
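Once the IR (.xml/.bin pair) exists, a minimal sketch of loading it on the Raspberry Pi's Movidius device with the 2019-era OpenVINO Inference Engine Python API might look like the following (not part of the original answer; the file names are assumptions, and preprocessing must match your model):

import numpy as np
from openvino.inference_engine import IECore, IENetwork

# IR files produced by mo_tf.py (names assumed here)
model_xml = 'frozen_inference_graph.xml'
model_bin = 'frozen_inference_graph.bin'

ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)

input_blob = next(iter(net.inputs))    # typically "image_tensor" for Object Detection API models
out_blob = next(iter(net.outputs))

# Target the Movidius (Myriad) device on the Raspberry Pi
exec_net = ie.load_network(network=net, device_name='MYRIAD')

# Dummy NCHW input sized from the network itself
n, c, h, w = net.inputs[input_blob].shape
image = np.zeros((n, c, h, w), dtype=np.float32)
result = exec_net.infer(inputs={input_blob: image})
print(result[out_blob].shape)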
Answer 1 (score: 0)
When you try to convert ssd.pb (the frozen model), you are passing only the input model argument to the mo_tf.py script. To convert an object detection model to IR, go to the Model Optimizer directory and run the mo_tf.py script with the following required parameters:
--input_model:
The file with the pre-trained model (a binary or text .pb file after freezing).
--tensorflow_use_custom_operations_config:
A configuration file describing the rules used to convert specific TensorFlow* topologies. For models downloaded from the TensorFlow* Object Detection API model zoo, the configuration files are in the <INSTALL_DIR>/deployment_tools/model_optimizer/extensions/front/tf directory. Use ssd_v2_support.json / ssd_support.json for frozen SSD topologies from the model zoo; they are available in that directory.
--tensorflow_object_detection_api_pipeline_config:
A special configuration file describing the topology hyper-parameters and structure of the TensorFlow Object Detection API model. For models downloaded from the TensorFlow* Object Detection API model zoo, the configuration file is named pipeline.config. If you plan to train the model yourself, templates for these files can be found in the models repository.
--input_shape (optional):
The shape of the input image; the values to pass depend on the pre-trained model you are using. The model takes input images in [1 H W C] format, where the parameters are the batch size, height, width, and channels, respectively. Model Optimizer does not accept negative values for batch, height, width, and channel numbers, so if the input image size of the model (SSD MobileNet) is known in advance, pass a valid set of 4 positive numbers with the --input_shape argument. If it is not known, you do not need to pass an input shape.
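For example, for an SSD MobileNet trained at the common 300x300 resolution (an assumption here; check the fixed_shape_resizer entry in your pipeline.config), the flag would be passed as:
--input_shape [1,300,300,3]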
OpenVINO provides a sample mo_tf.py command that uses the SSD-MobileNet-v2-COCO model downloaded from the Model Downloader:
python mo_tf.py
--input_model "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.frozen.pb"
--tensorflow_use_custom_operations_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\model_optimizer\extensions\front\tf\ssd_v2_support.json"
--tensorflow_object_detection_api_pipeline_config "c:\Program Files (x86)\IntelSWTools\openvino_2019.1.087\deployment_tools\tools\model_downloader\object_detection\common\ssd_mobilenet_v2_coco\tf\ssd_mobilenet_v2_coco.config"
--data_type FP16
--log_level DEBUG
For more details, see https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_Object_Detection_API_Models.html
Hope this helps.
Answer 2 (score: 0)
To convert a MobileNet V2 SSD, add "Postprocessor/Cast_1" to the start_points in the original ssd_v2_support.json and then use the command below. It should work fine.
"instances": {
"end_points": [
"detection_boxes",
"detection_scores",
"num_detections"
],
"start_points": [
"Postprocessor/Shape",
"Postprocessor/scale_logits",
"Postprocessor/Tile",
"Postprocessor/Reshape_1",
"Postprocessor/Cast_1"
]
},
Then use the following command:
#### object detection conversion (run in a Jupyter notebook; the ! line is a shell escape)
import platform

# Detect Windows in case the paths below need to be adjusted (unused as-is on Linux)
is_win = 'windows' in platform.platform().lower()

# Paths to Model Optimizer, the SSD support config, and the frozen model/pipeline
mo_tf_path = '/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py'
json_file = '/opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json'
pb_file = 'model/frozen_inference_graph.pb'
pipeline_file = 'model/pipeline.config'
output_dir = 'output/'

# Build an input shape string such as [1,300,300,3]; it can be appended to the
# command below via --input_shape if Model Optimizer cannot infer the shape itself
img_height = 300
input_shape = [1, img_height, img_height, 3]
input_shape_str = str(input_shape).replace(' ', '')
input_shape_str

!python3 {mo_tf_path} --input_model {pb_file} --tensorflow_object_detection_api_pipeline_config {pipeline_file} --tensorflow_use_custom_operations_config {json_file} --output="detection_boxes,detection_scores,num_detections" --output_dir {output_dir} --reverse_input_channels --data_type FP16 --log_level DEBUG
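If the conversion succeeds, the IR pair lands in output/ (Model Optimizer names it after the input model by default, so here frozen_inference_graph.xml and frozen_inference_graph.bin) and can then be loaded on the MYRIAD device roughly as sketched under Answer 0.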