Error converting a TF model with tf.trt for Jetson Nano

Posted: 2019-11-19 18:44:28

Tags: tensorflow tensorrt nvidia-jetson nvidia-jetson-nano

I am trying to convert a TF 1.14.0 saved_model to TensorRT on a Jetson Nano. I saved the model via tf.saved_model.save and am trying to convert it on the Nano. However, I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from acoustic_cnn/conv2d_seq_layer/conv3d/kernel:0 incompatible with expected resource.

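For context, the model was saved on the training machine roughly like this (a minimal sketch; the AcousticModel class and its single conv layer are assumptions standing in for my real network, matched to the (32, 18, 63, 8) input used in the conversion script below):

import tensorflow as tf

tf.enable_eager_execution()

class AcousticModel(tf.Module):
    """Stand-in for the real network; the layer choice is an assumption."""

    def __init__(self):
        super(AcousticModel, self).__init__()
        self.conv = tf.keras.layers.Conv2D(16, 3, padding='same')

    @tf.function(input_signature=[tf.TensorSpec([None, 18, 63, 8], tf.float32)])
    def __call__(self, x):
        return self.conv(x)

model = AcousticModel()
tf.saved_model.save(model, 'tst', signatures=model.__call__)
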
I have seen this issue discussed online, but none of the solutions worked for me. I have tried:

  1. Setting tf.keras.backend.set_learning_phase(0) (source)

  2. Using is_dynamic_op=True, precision_mode='FP32' (source); I still get the error.

  3. Also, I am using TF Eager, so I cannot see how to apply the graph modification suggested here; see the sketch after this list for what I understand that fix to be.

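For reference, my understanding of the suggested graph modification is to freeze the variables into constants in graph mode before running the converter, roughly like this (a sketch only; 'output' is a placeholder node name, and this session-based API cannot be used once eager execution has been enabled, which is exactly my problem):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel into a graph-mode session.
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'tst')
    # Fold variables into constants so no resource inputs remain.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(),
        output_node_names=['output'])  # placeholder; the real name comes from the signature
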
Let me know what else you would like me to try.

For reference, below is the code I use for conversion, and here is a link to my saved_model.

Conversion code

import numpy as np
import tensorflow as tf
from ipdb import set_trace
from tensorflow.python.compiler.tensorrt import trt_convert as trt

INPUT_SAVED_MODEL_DIR = 'tst'
OUTPUT_SAVED_MODEL_DIR = 'tst_out'

tf.enable_eager_execution()

def load_run_savedmodel():
    # Load the SavedModel and run one forward pass as a sanity check.
    mod = tf.saved_model.load_v2(INPUT_SAVED_MODEL_DIR)
    inp = tf.convert_to_tensor(np.ones((32, 18, 63, 8)), dtype=tf.float32)
    out = mod(inp)
    return out

def convert_savedmodel():

    tf.keras.backend.set_learning_phase(0)

    # Leftover from an earlier attempt; the TrtGraphConverter call below
    # does not use it.
    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        # precision_mode='FP16',
        # is_dynamic_op=True
    )

    converter = trt.TrtGraphConverter(input_saved_model_dir=INPUT_SAVED_MODEL_DIR,
                                      is_dynamic_op=True,
                                      precision_mode='FP32')

    converter.convert()
    converter.save(OUTPUT_SAVED_MODEL_DIR)

    load_infer_savedmodel()

    return None

    load_infer_savedmodel()

    return None

def load_infer_savedmodel():
    # NOTE: tf.Session() raises a RuntimeError after eager execution has been
    # enabled, so this only works in a separate, graph-mode process.
    with tf.Session() as sess:
        # First load the SavedModel into the session
        tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], OUTPUT_SAVED_MODEL_DIR)
        set_trace()
        # input_tensor, output_tensor, and input_data are placeholders;
        # the real tensor names come from the model's signature.
        output = sess.run([output_tensor], feed_dict={input_tensor: input_data})


if __name__ == '__main__':
    convert_savedmodel()
    # load_infer_savedmodel()

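If the conversion ever succeeds, I would sanity-check the result with something like this (a sketch; in TF 1.x, converter.convert() returns the converted frozen GraphDef, so the TensorRT engine nodes can be counted directly):

graph_def = converter.convert()
# Count how many subgraphs were replaced by TensorRT engines.
trt_ops = [n.name for n in graph_def.node if n.op == 'TRTEngineOp']
print('Created %d TRTEngineOp node(s)' % len(trt_ops))
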
0 Answers:

No answers