TFLiteConverter.from_session with input_tensor and output_tensor gives TypeError (Tensor objects are only iterable when eager execution is enabled)

Asked: 2019-05-08 15:33:27

Tags: python tensorflow keras tensorflow-lite tf.keras

I am trying to produce a tflite file for a model using quantization-aware training. I build and train the model with keras, but I run into trouble when saving it (https://github.com/tensorflow/tensorflow/issues/27880). It does not seem possible to save the graph as an h5 file and then convert that to a tflite file, so I am trying to convert directly from the session.

When I do this, I get an error saying the input and output tensors are not iterable:

TypeError: Tensor objects are only iterable when eager execution is enabled. To iterate over this tensor use tf.map_fn. 

I tried redefining the inputs and outputs after training, although I am not 100% sure what format they are supposed to be in, and it did not work. Enabling eager execution raises this error instead:

RuntimeError: The Session graph is empty.  Add operations to the graph before calling run().

I am rewriting the whole thing in plain tensorflow, but that code seems far less efficient: this keras version trains in about 3 minutes, while my tensorflow-only version takes days.

import tensorflow as tf

# x_train, y_train and feature_size are defined earlier (omitted here)
inputs = tf.keras.Input(shape=(feature_size,))

x = tf.keras.layers.Dense(700, activation='relu')(inputs)
x = tf.keras.layers.Dense(701, activation='relu')(x)
predictions = tf.keras.layers.Dense(1, activation='sigmoid')(x)

model = tf.keras.Model(inputs=inputs, outputs=predictions)

# Insert fake-quant ops for quantization-aware training into the Keras session's graph
sess = tf.keras.backend.get_session()
tf.contrib.quantize.create_training_graph(sess.graph)
sess.run(tf.global_variables_initializer())

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=32, epochs=4)

converter = tf.lite.TFLiteConverter.from_session(sess, input_tensors=inputs, output_tensors=predictions) #error here
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0]: (0., 1.)}
tflite_model = converter.convert()
open("non_seq_lite", "wb").write(tflite_model)

I expected this to build a tflite file, but instead it raises the error above. Thanks for any help.

Edit: if I wrap the tensors in lists, changing input_tensors and output_tensors to [inputs] and [outputs] (or use model.inputs and model.outputs), I get this error instead:

2019-05-09 08:16:59.344160: W tensorflow/c/c_api.cc:696] Operation '{name:'dense_2/Sigmoid' id:65 op device:{} def:{{{node dense_2/Sigmoid}} = Sigmoid[T=DT_FLOAT](dense_2/act_quant/FakeQuantWithMinMaxVars:0)}}' was changed by updating input tensor after it was run by a session. This mutation will have no effect, and will trigger an error in the future. Either don't modify nodes after running them or create a new session.
WARNING:tensorflow:From C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\python\ops\math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/4
2019-05-09 08:17:00.653920: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library cublas64_100.dll locally
336371/336371 [==============================] - 46s 137us/sample - loss: 0.6932 - acc: 0.8533
Epoch 2/4
336371/336371 [==============================] - 46s 136us/sample - loss: 2.1279 - acc: 0.8445
Epoch 3/4
336371/336371 [==============================] - 45s 135us/sample - loss: 2.0752 - acc: 0.8639
Epoch 4/4
336371/336371 [==============================] - 45s 135us/sample - loss: 2.0604 - acc: 0.8700
WARNING:tensorflow:From C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\lite.py:591: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\python\framework\graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
Traceback (most recent call last):
  File "non_sequential.py", line 38, in <module>
    tflite_model = converter.convert()
  File "C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\lite.py", line 455, in convert
    **converter_kwargs)
  File "C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 442, in toco_convert_impl
    input_data.SerializeToString())
  File "C:\Users\samc\venv_gpu\lib\site-packages\tensorflow\lite\python\convert.py", line 205, in toco_convert_protos
    "TOCO failed. See console for info.\n%s\n%s\n" % (stdout, stderr))
tensorflow.lite.python.convert.ConverterError: TOCO failed. See console for info.
2019-05-09 08:20:03.997025: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:03.997372: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "CPU"') for unknown op: WrapDatasetVariant
2019-05-09 08:20:03.997531: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "WrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: WrapDatasetVariant
2019-05-09 08:20:03.997762: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "CPU"') for unknown op: UnwrapDatasetVariant
2019-05-09 08:20:03.997927: E tensorflow/core/framework/op_kernel.cc:1325] OpKernel ('op: "UnwrapDatasetVariant" device_type: "GPU" host_memory_arg: "input_handle" host_memory_arg: "output_handle"') for unknown op: UnwrapDatasetVariant
2019-05-09 08:20:03.998204: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:03.998319: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:03.998464: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:03.998763: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:03.998998: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:03.999144: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:03.999236: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:03.999437: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:03.999516: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:03.999684: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:03.999804: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:03.999942: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.000053: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.000189: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.000306: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.000496: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.000615: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.000725: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.000848: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.001003: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.001126: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.001253: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.001337: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.001525: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.001639: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.001772: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.001893: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.002011: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.002133: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.002273: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.002415: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.002564: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.002677: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.002789: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: Assign
2019-05-09 08:20:04.002912: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: Assign
2019-05-09 08:20:04.003076: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.003153: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.003304: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.003384: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.003486: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.003663: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.003808: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.003928: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.004027: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignAdd
2019-05-09 08:20:04.004106: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignAdd
2019-05-09 08:20:04.004273: I tensorflow/lite/toco/import_tensorflow.cc:1324] Converting unsupported operation: AssignSub
2019-05-09 08:20:04.004409: I tensorflow/lite/toco/import_tensorflow.cc:1373] Unable to determine output type for op: AssignSub
2019-05-09 08:20:04.006262: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 135 operators, 214 arrays (0 quantized)
2019-05-09 08:20:04.007070: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 93 operators, 154 arrays (0 quantized)
2019-05-09 08:20:04.008023: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 93 operators, 154 arrays (0 quantized)
2019-05-09 08:20:04.015266: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 53 operators, 108 arrays (1 quantized)
2019-05-09 08:20:04.016352: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 53 operators, 108 arrays (1 quantized)
2019-05-09 08:20:04.017476: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 53 operators, 108 arrays (1 quantized)
2019-05-09 08:20:04.018090: F tensorflow/lite/toco/tooling_util.cc:1702] Array dense/act_quant/AssignMinEma/dense/act_quant/min/Pow, which is an input to the Sub operator producing the output array dense/act_quant/AssignMinEma/dense/act_quant/min/sub_2, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.

Edit 2: specifying default ranges for the converter with converter.default_ranges_stats = [-3, 3], on top of the above, produces the following error, which may be more telling:

2019-05-09 09:01:16.690426: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before default min-max range propagation graph transformations: 53 operators, 108 arrays (1 quantized)
2019-05-09 09:01:16.690807: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After default min-max range propagation graph transformations pass 1: 53 operators, 108 arrays (1 quantized)
2019-05-09 09:01:16.691328: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before quantization graph transformations: 53 operators, 108 arrays (1 quantized)
2019-05-09 09:01:16.691469: F tensorflow/lite/toco/graph_transformations/quantize.cc:491] Unimplemented: this graph contains an operator of type (Unsupported TensorFlow op: Assign) for which the quantized form is not yet implemented. Sorry, and patches welcome (that's a relatively fun patch to write, mostly providing the actual quantized arithmetic code for this op).

I think keras just includes some operations that are not supported when quantizing. I guess I will just rewrite it in tensorflow.
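
For reference, the tf.contrib.quantize docs export quantization-aware models through a separate inference graph: train with create_training_graph, rebuild the same model in a fresh graph, call create_eval_graph, restore the trained weights, and convert that session, which avoids the training-only Assign/AssignSub/AssignAdd ops TOCO complains about above. Below is a minimal, untested sketch of that workflow adapted to the model above; the checkpoint path is made up, and feature_size, x_train and y_train are assumed to be defined as in the question.

import tensorflow as tf

def build_model(feature_size):
    inputs = tf.keras.Input(shape=(feature_size,))
    x = tf.keras.layers.Dense(700, activation='relu')(inputs)
    x = tf.keras.layers.Dense(701, activation='relu')(x)
    predictions = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    return tf.keras.Model(inputs=inputs, outputs=predictions), inputs, predictions

# Training graph: insert training fake-quant ops, train, save a checkpoint.
train_graph = tf.Graph()
with train_graph.as_default():
    train_sess = tf.Session(graph=train_graph)
    tf.keras.backend.set_session(train_sess)
    model, inputs, predictions = build_model(feature_size)
    tf.contrib.quantize.create_training_graph(input_graph=train_graph)
    train_sess.run(tf.global_variables_initializer())
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=32, epochs=4)
    saver = tf.train.Saver()
    saver.save(train_sess, './qat_checkpoint')  # hypothetical checkpoint path

# Eval graph: same architecture, eval fake-quant ops, restored weights,
# and none of the training-only Assign ops.
eval_graph = tf.Graph()
with eval_graph.as_default():
    eval_sess = tf.Session(graph=eval_graph)
    tf.keras.backend.set_session(eval_sess)
    model, inputs, predictions = build_model(feature_size)
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)
    saver = tf.train.Saver()
    saver.restore(eval_sess, './qat_checkpoint')

    converter = tf.lite.TFLiteConverter.from_session(
        eval_sess, input_tensors=[inputs], output_tensors=[predictions])
    converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
    input_arrays = converter.get_input_arrays()
    converter.quantized_input_stats = {input_arrays[0]: (0., 1.)}
    tflite_model = converter.convert()

open('non_seq.tflite', 'wb').write(tflite_model)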

2 Answers:

Answer 0 (score: 0):

input_tensors and output_tensors should be given the tensors wrapped in a list ([]), so the converter can read the tensor names from them; passing a bare Tensor is what raises the "Tensor objects are only iterable when eager execution is enabled" TypeError.

I think it should look like this:

converter = tf.lite.TFLiteConverter.from_session(sess, input_tensors=[inputs], output_tensors=[predictions])

or:

converter = tf.lite.TFLiteConverter.from_session(sess, input_tensors=model.inputs, output_tensors=model.outputs)

Answer 1 (score: 0):

Do you need to run this?

converter = tf.lite.TFLiteConverter.from_session(sess, 
input_tensors=[inputs], output_tensors=[predictions]) #error here
converter.inference_type = tf.lite.constants.QUANTIZED_UINT8
input_arrays = converter.get_input_arrays()
converter.quantized_input_stats = {input_arrays[0]: (0., 1.)}
tflite_model = converter.convert()
open("non_seq_lite", "wb").write(tflite_model)

What about simply running this instead:

converter = tf.lite.TFLiteConverter.from_session(sess, 
input_tensors=[inputs], output_tensors=[predictions])
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Based on this documentation.
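
Note that this drops the quantization settings entirely, so the result would be a plain float tflite model. If a smaller file is still wanted without getting the quantization-aware training graph to convert, post-training weight quantization may be worth a try; a rough sketch, assuming the installed TF 1.x converter exposes the post_training_quantize flag:

# Rough sketch: post-training weight quantization as a fallback.
# Assumes the TF 1.x converter in use supports post_training_quantize.
converter = tf.lite.TFLiteConverter.from_session(
    sess, input_tensors=[inputs], output_tensors=[predictions])
converter.post_training_quantize = True
tflite_model = converter.convert()
open("converted_model_quant.tflite", "wb").write(tflite_model)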