Error when converting the MobileFaceNets_TF pretrained PB file to tflite

Time: 2019-05-29 08:05:47

Tags: tensorflow

I am using the MobileFaceNet_TF project.

The project ships a pretrained model under arch/pretrained_model.

I want to convert the frozen pb file, i.e. the pb file produced by the freeze script, into a tflite file.

System information

Have I written custom code (as opposed to using a stock example script provided in TensorFlow): maybe, the prelu function, see link
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 1.13
Python version: 3.7.3
Bazel version (if compiling from source): not compiled from source

  1. In the arch/pretrained_model directory, I used the tflite_convert command to convert the model to tflite:
tflite_convert  ^
--output_file  MobileFaceNet_9925_9680.tflite  ^
--graph_def_file  MobileFaceNet_9925_9680.pb    ^
--input_arrays  "input"  ^
--input_shapes  "1,112,112,3"  ^
--output_arrays  embeddings  ^
--output_format  TFLITE
  2. I also used toco, but the error is the same:
toco ^
--output_file MobileFaceNet_9925_9680.tflite ^
--graph_def_file MobileFaceNet_9925_9680.pb ^
--output_format TFLITE ^
--inference_type FLOAT ^
--inference_input_type FLOAT ^
--input_arrays input ^
--input_shapes 1,112,112,3 ^
--output_arrays embeddings
  3. I used the Python API; the error is the same:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'arch/pretrained_model/MobileFaceNet_9925_9680.pb',
    input_arrays=['input'],
    output_arrays=['embeddings'],
    input_shapes={'input': [1, 112, 112, 3]})
tflite_model = converter.convert()
open("test.tflite", "wb").write(tflite_model)
  4. I tried TensorFlow 2.0, but could not find an API that can import a pb file.
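(Note: in TF 2.x the frozen-graph importer appears to be available only through the v1 compat namespace; a minimal sketch, assuming the same file and tensor names as above and not verified against this model:)

import tensorflow as tf  # TensorFlow 2.x

# tf.lite.TFLiteConverter in 2.x no longer exposes from_frozen_graph;
# the v1 compat namespace still does.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    'arch/pretrained_model/MobileFaceNet_9925_9680.pb',
    input_arrays=['input'],
    output_arrays=['embeddings'],
    input_shapes={'input': [1, 112, 112, 3]})
tflite_model = converter.convert()
open('MobileFaceNet_9925_9680.tflite', 'wb').write(tflite_model)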

Code to reproduce the issue

Error message from the tflite_convert tool

λ tflite_convert  ^
More? --output_file  MobileFaceNet_9925_9680.tflite  ^
More? --graph_def_file  MobileFaceNet_9925_9680.pb    ^
More? --input_arrays  "input"  ^
More? --input_shapes  "1,112,112,3"  ^
More? --output_arrays  embeddings  ^
More? --output_format  TFLITE
2019-05-28 15:32:07.380246: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-05-28 15:32:08.157890: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.341
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.62GiB
2019-05-28 15:32:08.185527: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-28 15:32:08.727623: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-28 15:32:08.742876: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-28 15:32:08.753078: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-28 15:32:08.764321: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1365 MB memory) -> physical GPU (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\Scripts\tflite_convert-script.py", line 10, in <module>
    sys.exit(main())
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 442, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 438, in run_main
    _convert_model(tflite_flags)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 191, in _convert_model
    output_data = converter.convert()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\lite.py", line 455, in convert
    **converter_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\convert.py", line 442, in toco_convert_impl
    input_data.SerializeToString())
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\convert.py", line 205, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-05-28 15:32:13.074828: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-05-28 15:32:13.078767: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-05-28 15:32:13.079077: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-05-28 15:32:13.079438: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-05-28 15:32:13.079756: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-05-28 15:32:13.087213: E tensorflow/lite/toco/import_tensorflow.cc:2079] tensorflow::ImportGraphDef failed with status: Not found: Op type not registered 'Placeholder' in binary running on DESKTOP-TG1FKM4. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2019-05-28 15:32:13.434412: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 2747 operators, 4533 arrays (0 quantized)
2019-05-28 15:32:13.816499: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 1810 operators, 3076 arrays (0 quantized)
2019-05-28 15:32:14.122395: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1810 operators, 3076 arrays (0 quantized)
2019-05-28 15:32:14.124627: F tensorflow/lite/toco/graph_transformations/resolve_tensorflow_switch.cc:98] Check failed: other_op->type == OperatorType::kMerge Found BatchNormalization as non-selected output from Switch, but only Merge supported.

Error message from the toco tool

toco ^
More? --output_file MobileFaceNet_9925_9680.tflite ^
More? --graph_def_file MobileFaceNet_9925_9680.pb ^
More? --output_format TFLITE ^
More? --inference_type FLOAT ^
More? --inference_input_type FLOAT ^
More? --input_arrays input ^
More? --input_shapes 1,112,112,3 ^
More? --output_arrays embeddings
2019-05-29 09:50:45.125226: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-05-29 09:50:45.877324: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.341
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.62GiB
2019-05-29 09:50:45.899741: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-29 09:50:46.412028: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-29 09:50:46.424631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-05-29 09:50:46.434346: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-05-29 09:50:46.441283: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1365 MB memory) -> physical GPU (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\Scripts\toco-script.py", line 10, in <module>
    sys.exit(main())
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 442, in main
    app.run(main=run_main, argv=sys.argv[:1])
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
    _sys.exit(main(argv))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 438, in run_main
    _convert_model(tflite_flags)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\tflite_convert.py", line 191, in _convert_model
    output_data = converter.convert()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\lite.py", line 455, in convert
    **converter_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\convert.py", line 442, in toco_convert_impl
    input_data.SerializeToString())
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\lite\python\convert.py", line 205, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-05-29 09:50:50.939448: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-05-29 09:50:50.942638: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-05-29 09:50:50.942917: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-05-29 09:50:50.943314: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-05-29 09:50:50.943634: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-05-29 09:50:50.950475: E tensorflow/lite/toco/import_tensorflow.cc:2079] tensorflow::ImportGraphDef failed with status: Not found: Op type not registered 'Placeholder' in binary running on DESKTOP-TG1FKM4. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2019-05-29 09:50:51.309990: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 2747 operators, 4533 arrays (0 quantized)
2019-05-29 09:50:51.724318: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 1810 operators, 3076 arrays (0 quantized)
2019-05-29 09:50:52.042452: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1810 operators, 3076 arrays (0 quantized)
2019-05-29 09:50:52.045060: F tensorflow/lite/toco/graph_transformations/resolve_tensorflow_switch.cc:98] Check failed: other_op->type == OperatorType::kMerge Found BatchNormalization as non-selected output from Switch, but only Merge supported.

Error message from the Python API

---------------------------------------------------------------------------
ConverterError                            Traceback (most recent call last)
<ipython-input-7-5a3c093c4e77> in <module>
      1 import tensorflow as tf
      2 converter = tf.lite.TFLiteConverter.from_frozen_graph('arch/pretrained_model/MobileFaceNet_9925_9680.pb',input_arrays=['input'],output_arrays=['embeddings'],input_shapes={'input':[1,112,112,3]})
----> 3 tflite_model = converter.convert()
      4 open("test.tflite", "wb").write(tflite_model)

~/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/lite.py in convert(self)
    453           input_tensors=self._input_tensors,
    454           output_tensors=self._output_tensors,
--> 455           **converter_kwargs)
    456     else:
    457       result = _toco_convert_graph_def(

~/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py in toco_convert_impl(input_data, input_tensors, output_tensors, *args, **kwargs)
    440   data = toco_convert_protos(model_flags.SerializeToString(),
    441                              toco_flags.SerializeToString(),
--> 442                              input_data.SerializeToString())
    443   return data
    444 

~/anaconda3/lib/python3.7/site-packages/tensorflow/lite/python/convert.py in toco_convert_protos(model_flags_str, toco_flags_str, input_data_str)
    203       stderr = _try_convert_to_unicode(stderr)
    204       raise ConverterError(
--> 205           "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
    206   finally:
    207     # Must manually cleanup files.

ConverterError: TOCO failed. See console for info.
2019-05-29 09:43:46.962261: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-05-29 09:43:46.985465: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3499530000 Hz
2019-05-29 09:43:46.987287: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55a9808ffee0 executing computations on platform Host. Devices:
2019-05-29 09:43:46.987353: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-05-29 09:43:47.089108: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-05-29 09:43:47.089794: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55a980904a30 executing computations on platform CUDA. Devices:
2019-05-29 09:43:47.089868: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2019-05-29 09:43:47.090474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6575
pciBusID: 0000:42:00.0
totalMemory: 10.91GiB freeMemory: 9.87GiB
2019-05-29 09:43:47.090496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-05-29 09:43:47.095453: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-05-29 09:43:47.095478: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-05-29 09:43:47.095487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-05-29 09:43:47.095829: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 9601 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:42:00.0, compute capability: 6.1)
2019-05-29 09:43:47.227671: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 2747 operators, 4533 arrays (0 quantized)
2019-05-29 09:43:47.335840: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 1810 operators, 3076 arrays (0 quantized)
2019-05-29 09:43:47.418666: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1810 operators, 3076 arrays (0 quantized)
2019-05-29 09:43:47.419336: F tensorflow/lite/toco/graph_transformations/resolve_tensorflow_switch.cc:98] Check failed: other_op->type == OperatorType::kMerge Found BatchNormalization as non-selected output from Switch, but only Merge supported.
Aborted (core dumped)

1 answer:

Answer 0: (score: 0)

When generating the model, you need to set the trainable and is_training parameters to false.

  1. Set is_training and trainable to false in ./nets/MobileFaceNet.py:
batch_norm_params = {
      'is_training': False,
      'trainable': False,
      'center': True,
      'scale': True,
      'fused': True,
      'decay': 0.995,
      'epsilon': 2e-5,
      # force in-place updates of mean and variance estimates
      'updates_collections': None,
      # Moving averages ends up in the trainable variables collection
      'variables_collections': [ tf.GraphKeys.TRAINABLE_VARIABLES ],
  }
  2. Freeze the graph using the original freeze_graph.py together with the modified MobileFaceNet.py:
python freeze_graph.py --pretrained_model arch/pretrained_model --output_file freeze.pb
  3. Convert the resulting frozen graph (freeze.pb) to a tflite model.
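A plausible form of that final conversion command, assuming the same input/output tensor names as in the question and the freeze.pb produced in step 2 (the output file name is arbitrary, and this exact invocation is an assumption rather than a quote from the linked issue):

tflite_convert --graph_def_file freeze.pb --output_file MobileFaceNet.tflite --input_arrays input --input_shapes 1,112,112,3 --output_arrays embeddings --output_format TFLITE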

TensorFlow version: 1.14.0

Reference: https://github.com/sirius-ai/MobileFaceNet_TF/issues/46