TF2.0 Lite for Android: converting a Keras (LSTM) model to tflite

Posted: 2019-10-11 13:34:23

Tags: android tensorflow keras lstm tensorflow2.0

I am using the following code for an LSTM (Keras Sequential model):

def MyModel_keras():
    model = tf.keras.models.Sequential([
        tf.keras.layers.LSTM(conf.n_hidden_lstm, activation='tanh', return_sequences=False, name='lstm1'),
        tf.keras.layers.Dense(conf.n_dense_1, activation='relu', name='dense1'),
        tf.keras.layers.Dense(conf.num_output_classes, activation='softmax', name='dense2')
    ])

    return model

I have tried the following:

  1. converter = tf.lite.TFLiteConverter.from_keras_model(model) followed by converter.convert(), but it ended with:
2019-10-11 15:28:56.055596: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2019-10-11 15:28:56.055830: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2019-10-11 15:28:56.058098: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.62
pciBusID: 0000:04:00.0
2019-10-11 15:28:56.058256: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2019-10-11 15:28:56.058283: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-10-11 15:28:56.058308: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2019-10-11 15:28:56.058332: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2019-10-11 15:28:56.058354: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2019-10-11 15:28:56.058378: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2019-10-11 15:28:56.058403: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2019-10-11 15:28:56.064157: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-10-11 15:28:56.064296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-10-11 15:28:56.064310: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-10-11 15:28:56.064319: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-10-11 15:28:56.067795: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4956 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:04:00.0, compute capability: 7.5)
2019-10-11 15:28:56.207769: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: graph_to_optimize
2019-10-11 15:28:56.207915: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 41 nodes (0), 47 edges (0), time = 50.459ms.
2019-10-11 15:28:56.207949: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 41 nodes (0), 47 edges (0), time = 15.741ms.
2019-10-11 15:28:56.207976: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: while_body_23965
2019-10-11 15:28:56.208002: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-10-11 15:28:56.208028: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2019-10-11 15:28:56.208039: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: while_cond_23964
2019-10-11 15:28:56.208045: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-10-11 15:28:56.208053: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-10-11 15:28:56.208059: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: __inference___backward_cudnn_lstm_with_fallback_24182_24364
2019-10-11 15:28:56.208066: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.002ms.
2019-10-11 15:28:56.208073: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-10-11 15:28:56.208080: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: __inference_standard_lstm_24070
2019-10-11 15:28:56.208086: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 76 nodes (0), 106 edges (0), time = 2.356ms.
2019-10-11 15:28:56.208093: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 76 nodes (0), 106 edges (0), time = 2.667ms.
2019-10-11 15:28:56.208099: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: __forward_cudnn_lstm_with_fallback_24363
2019-10-11 15:28:56.208106: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
2019-10-11 15:28:56.208113: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.001ms.
2019-10-11 15:28:56.208119: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: __inference_cudnn_lstm_with_fallback_24181
2019-10-11 15:28:56.208126: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0.003ms.
2019-10-11 15:28:56.208133: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: function_optimizer did nothing. time = 0ms.
2019-10-11 15:28:56.208139: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:716] Optimization results for grappler item: __inference_standard_lstm_24070_specialized_for_sequential_lstm1_StatefulPartitionedCall_at_graph_to_optimize
2019-10-11 15:28:56.208146: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 72 nodes (0), 102 edges (0), time = 2.321ms.
2019-10-11 15:28:56.208153: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:718]   function_optimizer: Graph size after: 72 nodes (0), 102 edges (0), time = 2.853ms.
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/home/d1300/no_backup/d1300/tfRC/lib/python3.6/site-packages/tensorflow_core/lite/python/lite.py", line 405, in convert
    self._funcs[0], lower_control_flow=False)
  File "/home/d1300/no_backup/d1300/tfRC/lib/python3.6/site-packages/tensorflow_core/python/framework/convert_to_constants.py", line 414, in convert_variables_to_constants_v2
    function_data = _get_control_flow_function_data(node_defs, tensor_data)
  File "/home/d1300/no_backup/d1300/tfRC/lib/python3.6/site-packages/tensorflow_core/python/framework/convert_to_constants.py", line 262, in _get_control_flow_function_data
    arg_types[idx] = get_resource_type(input_name)
  File "/home/d1300/no_backup/d1300/tfRC/lib/python3.6/site-packages/tensorflow_core/python/framework/convert_to_constants.py", line 228, in get_resource_type
    numpy_type = tensor_data[node_name]["data"].dtype
KeyError: 'kernel'

I have also tried converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_folder'), but that ended up with a similar error from the optimizer.
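
For context, a minimal, self-contained sketch of the two conversion routes described above (the layer sizes and input shape are placeholders standing in for the conf.* values; the failing convert() calls are commented out):

import tensorflow as tf

# Placeholder sizes standing in for conf.n_hidden_lstm, conf.n_dense_1,
# conf.num_output_classes and the input shape used in training.
n_steps, n_features = 28, 9
n_hidden_lstm, n_dense_1, num_output_classes = 64, 32, 5

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(n_hidden_lstm, activation='tanh', return_sequences=False,
                         input_shape=(n_steps, n_features), name='lstm1'),
    tf.keras.layers.Dense(n_dense_1, activation='relu', name='dense1'),
    tf.keras.layers.Dense(num_output_classes, activation='softmax', name='dense2'),
])

# Route 1: directly from the Keras model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# tflite_model = converter.convert()  # fails as shown in the log above

# Route 2: via a SavedModel directory
tf.saved_model.save(model, 'saved_model_folder')
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_folder')
# tflite_model = converter.convert()  # fails with a similar error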

Is there a workaround? Can a .pb file be imported directly into Android with TF2.0?

2 answers:

Answer 0: (score: 0)

Unfortunately, converting LSTM/RNN models to TFLite is not supported yet; the workaround you may need is to convert a model that contains CNNs or other supported layers. Here is the official documentation on what is and is not supported.
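
As a rough illustration of the workaround hinted at here (the model below is an assumption on my part, not something the answer specifies), the LSTM could be swapped for layers the converter handled at the time, such as a 1-D convolution plus pooling:

import tensorflow as tf

# Placeholder sizes; substitute the real conf.* values from the question.
n_steps, n_features = 28, 9
n_dense_1, num_output_classes = 32, 5

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(64, kernel_size=3, activation='relu',
                           input_shape=(n_steps, n_features), name='conv1'),
    tf.keras.layers.GlobalAveragePooling1D(name='pool1'),
    tf.keras.layers.Dense(n_dense_1, activation='relu', name='dense1'),
    tf.keras.layers.Dense(num_output_classes, activation='softmax', name='dense2'),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # only CNN/Dense ops involved, so no LSTM-related failure expected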

Answer 1: (score: 0)

Perhaps you can convert specific RNN ops in TensorFlow to TFLite. See this doc. We can use tf.compat.v1.nn.rnn_cell and the others mentioned in this section. As stated there,

  Currently, RNN models using tf.compat.v1.nn.static_rnn can be converted successfully as long as no sequence_length is specified.
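
A minimal sketch of that approach, built with the TF1-compat API under TF 2.0 (all shapes below are placeholders, and whether the final convert() succeeds still depends on the ops that end up in the graph):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Hypothetical shapes; adapt to the real model.
n_steps, n_features, n_hidden, n_classes = 28, 9, 64, 5

inputs = tf.compat.v1.placeholder(tf.float32, [1, n_steps, n_features], name='input')
# static_rnn expects a Python list with one tensor per timestep
x = tf.unstack(inputs, n_steps, axis=1)

cell = tf.compat.v1.nn.rnn_cell.LSTMCell(n_hidden)
# Note: no sequence_length argument, per the quoted restriction
outputs, _ = tf.compat.v1.nn.static_rnn(cell, x, dtype=tf.float32)
logits = tf.compat.v1.layers.dense(outputs[-1], n_classes, name='output')

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    converter = tf.compat.v1.lite.TFLiteConverter.from_session(sess, [inputs], [logits])
    tflite_model = converter.convert()
    open('model.tflite', 'wb').write(tflite_model)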

In addition, they provide a drop-in replacement for dynamic RNN architectures; see this section. Best of all, they provide a Colab notebook. Also, refer to the README section.

Note: the docs reference APIs that ship with TensorFlow 1.x. You may need to make a few changes to migrate the code to TensorFlow 2.0.