Cannot convert ONNX model to TFLite with TF 2.4.1

Date: 2021-04-22 17:05:55

Tags: tensorflow tensorflow-lite onnx

I have an ONNX model which I can successfully convert to TensorFlow using TF 2.4.1. But when it comes to converting that saved model to TFLite, an error occurs.

Code:

import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

print(tf.__version__)

# Convert model.onnx to Tensorflow
onnx_model = onnx.load('model.onnx')
onnx.checker.check_model(onnx_model) 
tf_rep = prepare(onnx_model)  
tf_rep.export_graph('model')  

# Convert saved model to tflite
converter = tf.lite.TFLiteConverter.from_saved_model('model')
tf_lite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tf_lite_model)
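When the TFLite step fails like this, it can help to first confirm that the SavedModel exported by onnx-tf is itself well-formed. A minimal sketch, assuming the export directory is `model` as above, using the `saved_model_cli` tool that ships with TensorFlow:

```shell
# List the serving signature of the exported SavedModel; the inputs and
# outputs shown should match what the original ONNX model expects.
saved_model_cli show --dir model --tag_set serve --signature_def serving_default
```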

Everything goes fine until the conversion step, which ends like this:

 >>> tf_lite_model = converter.convert()
    2021-04-22 18:18:14.715046: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:316] Ignored output_format.
    2021-04-22 18:18:14.715072: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:319] Ignored drop_control_dependency.
    2021-04-22 18:18:14.715078: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:325] Ignored change_concat_input_ranges.
    2021-04-22 18:18:14.716044: I tensorflow/cc/saved_model/reader.cc:32] Reading SavedModel from: model
    2021-04-22 18:18:14.778050: I tensorflow/cc/saved_model/reader.cc:55] Reading meta graph with tags { serve }
    2021-04-22 18:18:14.778083: I tensorflow/cc/saved_model/reader.cc:93] Reading SavedModel debug info (if present) from: model
    2021-04-22 18:18:14.998062: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
    2021-04-22 18:18:15.043862: I tensorflow/cc/saved_model/loader.cc:206] Restoring SavedModel bundle.
    2021-04-22 18:18:15.438804: I tensorflow/cc/saved_model/loader.cc:190] Running initialization op on SavedModel bundle at path: model
    2021-04-22 18:18:15.809851: I tensorflow/cc/saved_model/loader.cc:277] SavedModel load for tags { serve }; Status: success: OK. Took 1093808 microseconds.
    2021-04-22 18:18:18.757257: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:194] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
    loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): error: operand #0 does not dominate this use
    Traceback (most recent call last):
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 210, in toco_convert_protos
        model_str = wrap_toco.wrapped_toco_convert(model_flags_str,
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/wrap_toco.py", line 32, in wrapped_toco_convert
        return _pywrap_toco_api.TocoConvert(
    Exception: <unknown>:0: error: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
    <unknown>:0: note: loc("PartitionedCall"): called from
    <unknown>:0: note: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here


    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/lite.py", line 739, in convert
        result = _convert_saved_model(**converter_kwargs)
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 632, in convert_saved_model
        data = toco_convert_protos(
      File "/Users/decades/anaconda3/envs/py38/lib/python3.8/site-packages/tensorflow/lite/python/convert.py", line 216, in toco_convert_protos
        raise ConverterError(str(e))
    tensorflow.lite.python.convert.ConverterError: <unknown>:0: error: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand #0 does not dominate this use
    <unknown>:0: note: loc("PartitionedCall"): called from
    <unknown>:0: note: loc(callsite(callsite("Pad_16@__inference___call___16503" at "PartitionedCall@__inference_signature_wrapper_16752") at "PartitionedCall")): operand defined here

    

I have no idea what this message means, but if I switch to TF 2.2 the conversion passes without errors. The bad news is that with TF 2.2 the initial ONNX-to-TF conversion now fails because of another issue.

Does anyone know what this message means and what can be done about it?

TIA

1 answer:

Answer 0 (score: 0)

Could you share your saved model directory with me? I can help debug it.

As general advice, there are two possibilities:

(1) The TF Lite converter may not handle the saved model correctly.

(2) The onnx conversion tool may not be creating a valid TF saved model.

Using the latest TF version (2.5, or tf-nightly) may help resolve this problem in the (1) case, but it's not guaranteed.
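Following that suggestion, one way to test case (1) is to retry the conversion with a newer converter in a separate environment, so the working TF 2.4.1 install stays untouched. A sketch (environment name is arbitrary; package names are as published on PyPI):

```shell
# Install the nightly TensorFlow build in a fresh virtual environment,
# then re-run the TFLite conversion script with it.
python -m venv tf-nightly-env
source tf-nightly-env/bin/activate
pip install tf-nightly
```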


I confirmed that the tf-nightly version can convert the attached saved model without any problem:

converter = tf.lite.TFLiteConverter.from_saved_model(
      "/tmp/onnx_model")
tflite_model = converter.convert()
with open("/tmp/onnx.tflite", "wb") as f:
  f.write(tflite_model)
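As a separate sanity check (not part of the original answer), the same converter-plus-interpreter round trip can be exercised on a trivial model to rule out environment problems before debugging the real one. A minimal sketch:

```python
import numpy as np
import tensorflow as tf

# Build a trivial concrete function and convert it, to verify that the
# local TFLite converter and interpreter work end-to-end.
@tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
def double(x):
    return 2.0 * x

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()])
tflite_bytes = converter.convert()

# Run the converted model with the TFLite Interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.ones((1, 4), np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out['index'])  # doubled input
```

If this small example converts and runs but the onnx-tf export still fails, that points back at the exported SavedModel rather than the local TF installation.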