Custom implementation of Dequantize when converting .pb to .tflite

Time: 2018-06-20 06:37:09

Tags: android python tensorflow convolutional-neural-network

I am trying to use toco to convert a quantized TensorFlow .pb file to a TensorFlow Lite .lite file. The command used to create the .pb file (retrain.py is here) was:

python retrain.py \
--bottleneck_dir=/mobilenet_q/bottlenecks \
--how_many_training_steps=4000 \
--output_graph=/mobilenet_q/retrained_graph_mobilenet_q_1_224.pb \
--output_labels=/mobilenet_q/retrained_labels_mobilenet_q_1_224.txt \
--image_dir=/data \
--architecture=mobilenet_1.0_224_quantized

When I try to convert the .pb file to .tflite with the toco command:

bazel run --config=opt //tensorflow/contrib/lite/toco:toco \
  -- --input_file=retrained_graph_mobilenet_q_1_224.pb \
  --output_file=retrained_graph_mobilenet_q_1_224.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

I get the error: Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.toco_convert(). Here is a list of operators for which you will need custom implementations: Dequantize.
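The Dequantize ops appear because the retrained MobileNet graph is quantized, while the conversion requests `--inference_type=FLOAT`, so toco must insert dequantization that the standard TFLite runtime of that era did not ship. One commonly reported workaround (a sketch only, not necessarily the linked solution; the `--mean_values=128 --std_values=128` input-normalization values are the usual defaults for quantized MobileNet and are assumptions here) is to request fully quantized inference instead:

```shell
# Convert the quantized graph with quantized (uint8) inference,
# avoiding the unsupported Dequantize ops entirely.
bazel run --config=opt //tensorflow/contrib/lite/toco:toco \
  -- --input_file=retrained_graph_mobilenet_q_1_224.pb \
  --output_file=retrained_graph_mobilenet_q_1_224.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_values=128
```

Alternatively, as the error message itself suggests, passing `--allow_custom_ops` lets the conversion proceed, but then you must register a custom Dequantize kernel with the TFLite runtime on the Android side before the model will run.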

I have searched on GitHub and Stack Overflow, but have not found a satisfactory answer.

1 Answer:

Answer 0 (score: 0)

The discussion and solution are here.