Quantization not yet supported for op: 'DEQUANTIZE' in TensorFlow 2.x

Date: 2020-08-28 06:22:58

Tags: tensorflow2.0 tensorflow-lite quantization quantization-aware-training

I am doing QAT on a ResNet model in Keras, and I hit this error when converting it to a full-integer tflite model. I have already tried the latest tf-nightly, but it does not solve the problem. During QAT I use a quantize-annotated model so that the batch-normalization layers are quantized as well.

[Image: the annotated model]
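For context, here is a minimal sketch of one way such an annotated model can be produced with the tensorflow_model_optimization (tfmot) toolkit; the ResNet constructor and its arguments are placeholders, since the actual model comes from my training code:

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder base model; the real ResNet is built elsewhere in the training code
base_model = tf.keras.applications.ResNet50(weights=None, classes=10)

# quantize_model annotates the whole model and inserts fake-quant nodes,
# including around batch-normalization, so QAT can learn quantization ranges
q_aware_model = tfmot.quantization.keras.quantize_model(base_model)
q_aware_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])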

Here is the code I use to convert the model:

import numpy as np
import tensorflow as tf

# `layer` is the QAT-annotated Keras model described above
converter = tf.lite.TFLiteConverter.from_keras_model(layer)

def representative_dataset_gen():
    # Feed 50 sample images so the converter can calibrate activation ranges
    for _ in range(50):
        batch = next(train_generator)
        img = np.expand_dims(batch[0][0], 0).astype(np.float32)
        yield [img]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8
]
converter.experimental_new_converter = True

# converter.target_spec.supported_types = [tf.int8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8

quantized_tflite_model = converter.convert()
with open("test_try_v2.tflite", "wb") as f:
    f.write(quantized_tflite_model)
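One way to sanity-check the converted file, assuming the conversion succeeds, is to load it into the TFLite interpreter and confirm that the input and output tensors really are int8:

interpreter = tf.lite.Interpreter(model_content=quantized_tflite_model)
interpreter.allocate_tensors()
# Both dtypes should report numpy.int8 for a full-integer model
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])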

If I bypass this error by adding tf.lite.OpsSet.TFLITE_BUILTINS to target_spec.supported_ops, I still run into a DEQUANTIZE problem on the Edge TPU compiler:

ERROR: :61 op_context.input->type == kTfLiteUInt8 || op_context.input->type == kTfLiteInt8 || op_context.input->type == kTfLiteInt16 || op_context.input->type == kTfLiteFloat16 was not true.
ERROR: Node number 3 (DEQUANTIZE) failed to prepare.

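For reference, the bypass mentioned above amounts to adding the float builtins as a fallback, roughly like this; it lets the conversion finish, but any op that falls back to float gets wrapped in QUANTIZE/DEQUANTIZE pairs, which is what the Edge TPU compiler then rejects:

converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,  # prefer int8 kernels
    tf.lite.OpsSet.TFLITE_BUILTINS,       # fall back to float kernels
]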

1 Answer:

Answer 0 (score: 0):

The reason is that TensorFlow before 2.4 does not yet support full 8-bit integer inference for the DEQUANTIZE op. So the solution is to go back to TF 1.x, or switch to TF 2.4 instead.
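A minimal check before rerunning the conversion, assuming TF 2.4 (or a nightly that includes the fix) is installed:

# pip install tensorflow==2.4.0
import tensorflow as tf
print(tf.__version__)  # should report 2.4.x before rerunning the converter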