Using toco fails

Date: 2018-03-28 18:31:05

Tags: tensorflow tensorflow-lite

I am trying to generate a quantized TensorFlow Lite model following the instructions in tensorflow quantization.

First, during training I insert fake-quantization nodes into the graph with tf.contrib.quantize.create_training_graph() and tf.contrib.quantize.create_eval_graph(), and at the end produce a frozen pb file (model.pb).
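Roughly, my setup for that step looks like the following sketch (build_model() and the checkpoint path are placeholders for my actual code; only the rewrite and freeze calls matter here):

import tensorflow as tf
from tensorflow.python.framework import graph_util

# Training graph: apply the fake-quant rewrite before creating the train op.
train_graph = tf.Graph()
with train_graph.as_default():
    loss = build_model(is_training=True)  # placeholder for my model code
    tf.contrib.quantize.create_training_graph(input_graph=train_graph,
                                              quant_delay=0)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    # ... usual training loop, then save a checkpoint ...

# Eval graph: apply the eval rewrite, restore weights, and freeze to model.pb.
eval_graph = tf.Graph()
with eval_graph.as_default():
    build_model(is_training=False)
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, 'checkpoint_path')  # placeholder path
        frozen = graph_util.convert_variables_to_constants(
            sess, eval_graph.as_graph_def(),
            ['Test/Model/output_probs', 'Test/Model/final_state'])
        with open('model.pb', 'wb') as f:
            f.write(frozen.SerializeToString())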

Second, I convert the fake-quantized TensorFlow model to a quantized TensorFlow Lite model with the following command:

bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_file=model.pb \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--output_file=model.tflite \
--inference_type=QUANTIZED_UINT8 --input_shapes=1,1:1,5002 \
--input_arrays=Test/Model/input,Test/Model/apps \
--output_arrays=Test/Model/output_probs,Test/Model/final_state  \
--mean_values=127.5,127.5 --std_values=127.5,127.5 --allow_custom_ops

The conversion fails with the following log:

2018-03-28 18:00:38.348403: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 118 operators, 193 arrays (0 quantized)
2018-03-28 18:00:38.349394: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 118 operators, 193 arrays (0 quantized)
2018-03-28 18:00:38.382854: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 57 operators, 103 arrays (1 quantized)
2018-03-28 18:00:38.384327: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 2: 56 operators, 101 arrays (1 quantized)
2018-03-28 18:00:38.385235: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 3: 55 operators, 100 arrays (1 quantized)
2018-03-28 18:00:38.385995: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 55 operators, 100 arrays (1 quantized)
2018-03-28 18:00:38.386047: W tensorflow/contrib/lite/toco/graph_transformations/hardcode_min_max.cc:131] Skipping min-max setting for {TensorFlowSplit operator with output Test/Model/RNN/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/split} because output Test/Model/RNN/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/split already has min-max.
2018-03-28 18:00:38.386076: W tensorflow/contrib/lite/toco/graph_transformations/hardcode_min_max.cc:131] Skipping min-max setting for {TensorFlowSplit operator with output Test/Model/RNN/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/split} because output Test/Model/RNN/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/split already has min-max.
2018-03-28 18:00:38.386328: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After pre-quantization graph transformations pass 1: 48 operators, 93 arrays (1 quantized)
2018-03-28 18:00:38.386484: W tensorflow/contrib/lite/toco/graph_transformations/hardcode_min_max.cc:131] Skipping min-max setting for {TensorFlowSplit operator with output Test/Model/RNN/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/split} because output Test/Model/RNN/RNN/multi_rnn_cell/cell_1/basic_lstm_cell/split already has min-max.
2018-03-28 18:00:38.386502: W tensorflow/contrib/lite/toco/graph_transformations/hardcode_min_max.cc:131] Skipping min-max setting for {TensorFlowSplit operator with output Test/Model/RNN/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/split} because output Test/Model/RNN/RNN/multi_rnn_cell/cell_0/basic_lstm_cell/split already has min-max.
2018-03-28 18:00:38.386778: F tensorflow/contrib/lite/toco/tooling_util.cc:1432] Array Test/Model/embedding_lookup, which is an input to the TensorFlowReshape operator producing the output array Test/Model/Reshape_1, is lacking min/max data, which is necessary for quantization. Either target a non-quantized output format, or change the input graph to contain min/max information, or pass --default_ranges_min= and --default_ranges_max= if you do not care about the accuracy of results.
Aborted

What is the problem? Where did I go wrong?

1 Answer:

Answer 0 (score: 1)

You are not doing anything wrong.

Currently, create_training_graph and create_eval_graph are not yet robust across all model architectures. We have them working on most CNNs, but RNNs are still a work in progress and pose a distinct set of challenges.

Depending on the details of the RNN, the path to quantization right now is more involved and may require manually placing FakeQuantization ops in the right locations. In particular, your error message indicates that you need to add a FakeQuantization op after embedding_lookup (see the sketch below). That said, the resulting quantized RNN may run, but I have no idea what the accuracy will look like. It ultimately depends on the model and the dataset :)
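As a concrete illustration, here is a minimal sketch of attaching min/max information by hand right after the lookup. The variable names, shapes, and the [-1, 1] range are assumptions for illustration, not values taken from your graph; pick bounds that actually cover your trained embedding values, or use tf.fake_quant_with_min_max_vars to learn them during training:

import tensorflow as tf

# Hypothetical names and shapes; adapt these to the actual graph.
embeddings = tf.get_variable('embedding', shape=[5002, 200])
input_ids = tf.placeholder(tf.int32, shape=[1, 1], name='input')

looked_up = tf.nn.embedding_lookup(embeddings, input_ids)

# Manually attach min/max information so toco can quantize this array.
# The [-1, 1] range is an assumed placeholder, not a recommended value.
looked_up = tf.fake_quant_with_min_max_args(looked_up, min=-1.0, max=1.0)

Alternatively, as the error message itself suggests, passing --default_ranges_min= and --default_ranges_max= to toco fills in the missing ranges globally, at a likely cost in accuracy.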

I will update this answer once the automatic rewrites support RNNs properly.