I'm trying to get a MobileNetV2 model (with the last layer retrained on my data) running on the Google Coral Edge TPU.
I followed this tutorial https://www.tensorflow.org/lite/performance/post_training_quantization?hl=en for post-training quantization. The relevant code is:
...
train = tf.convert_to_tensor(np.array(train, dtype='float32'))
my_ds = tf.data.Dataset.from_tensor_slices(train).batch(1)

# Post-training quantization
def representative_dataset_gen():
    for input_value in my_ds.take(30):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_quant_model = converter.convert()
I successfully generated the quantized tflite model, but when I run edgetpu_compiler (following this page https://coral.withgoogle.com/docs/edgetpu/compiler/#usage), I get the following output:
edgetpu_compiler Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite
Edge TPU Compiler version 2.0.258810407
INFO: Initialized TensorFlow Lite runtime.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
(the same ERROR line is repeated 17 times)
Invalid model: Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite
Model could not be parsed
I'm quite new to the quantization process, so I may be missing something simple. Any ideas?
The model's input is a 3-channel RGB image. Does it require a 1-channel image as input instead? Can full-integer quantization be applied to 3-channel images?
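For what it's worth, full-integer quantization is per-tensor for activations: every value is mapped through a single (scale, zero_point) pair, so the number of input channels does not matter. A minimal NumPy sketch of that mapping, purely illustrative (not the converter's actual code), using a hypothetical 224x224 RGB input in [0, 1]:

```python
import numpy as np

# Full-integer quantization maps floats to uint8 via one (scale, zero_point)
# pair for the whole tensor; the channel count is irrelevant, so a
# 3-channel RGB input quantizes just like a 1-channel one.
img = np.random.rand(224, 224, 3).astype(np.float32)   # RGB, values in [0, 1]

scale, zero_point = 1.0 / 255.0, 0                     # params for a [0, 1] input
q = np.clip(np.round(img / scale) + zero_point, 0, 255).astype(np.uint8)
deq = (q.astype(np.float32) - zero_point) * scale      # dequantize to check error

assert q.shape == img.shape                            # channel dim untouched
assert np.max(np.abs(deq - img)) <= scale / 2 + 1e-6   # at most half a step off
```

The same mapping is applied regardless of whether the tensor has 1, 3, or any other number of channels.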
Answer 0 (score: 2)
I had the same problem and the same error message. I retrained MobileNetV2 using tensorflow.keras.applications MobileNetV2, and I found large differences between the TFLite tensors of my model and Coral's example models (https://coral.withgoogle.com/models/).
First, the input and output types differ. When I converted my tf.keras model to tflite, it contained float-typed input and output tensors, while the example models have integer types. The result also differs between the tensorflow-lite command-line converter and the Python converter (https://www.tensorflow.org/lite/convert/): the command-line converter outputs integer-typed I/O, while the Python converter outputs float-typed I/O. (This is really strange.)
Second, there are no batch-normalization (BN) layers in the example models, but Keras MobileNetV2 contains several. I suspect the number of 'ERROR: quantized_dimension must be in range [0, 1). Was 3.' messages is related to the number of BN layers, since there are 17 BN layers in the Keras model.
I'm still working on this issue. I will follow Coral's retraining example (https://coral.withgoogle.com/docs/edgetpu/retrain-detection/) and see whether that resolves it.
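As background on the error message itself: in the TFLite quantization scheme, quantized_dimension names the axis used for per-channel (per-axis) weight quantization, and depthwise-convolution kernels are quantized along axis 3, which is where the "Was 3" comes from. A minimal NumPy sketch of per-axis quantization, purely illustrative and using a hypothetical depthwise-style kernel:

```python
import numpy as np

def quantize_per_axis(w, axis, num_bits=8):
    # Symmetric int8 per-axis quantization: one scale per slice along `axis`.
    qmax = 2 ** (num_bits - 1) - 1   # 127 for int8
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    max_abs = np.max(np.abs(w), axis=reduce_axes)
    scales = max_abs / qmax          # one scale per channel along `axis`
    shape = [1] * w.ndim
    shape[axis] = -1
    q = np.round(w / scales.reshape(shape)).astype(np.int8)
    return q, scales

# Depthwise-conv-style kernel, quantized along the channel axis
# (quantized_dimension = 3), as the error message refers to.
w = np.random.randn(1, 3, 3, 32).astype(np.float32)
q, scales = quantize_per_axis(w, axis=3)
assert scales.shape == (32,)         # one scale per channel
```

Older Edge TPU compiler versions only accepted quantized_dimension 0, which is why per-channel depthwise weights were rejected; the answers below point at toolchain updates that address this.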
Answer 1 (score: 1)
This problem was fixed in tensorflow 1.15-rc. Convert the model to TFLite with the new tf version, and the TFLite model will then work with the Edge TPU compiler.
Also add these lines so that the TFLite model's input and output are of type uint8. (I think it should actually be tf.int8.)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
Check the link below: https://www.tensorflow.org/lite/performance/post_training_quantization
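An end-to-end sketch of that conversion recipe, assuming a TF 2.x-style from_keras_model API and a hypothetical tiny model standing in for the retrained MobileNetV2:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model; substitute your retrained MobileNetV2.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, activation='relu', input_shape=(8, 8, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3),
])

def representative_dataset_gen():
    # Random stand-in data; use real training samples in practice.
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()

# Verify the converted model really has uint8 input/output tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
assert interpreter.get_input_details()[0]['dtype'] == np.uint8
assert interpreter.get_output_details()[0]['dtype'] == np.uint8
```

With these settings the whole graph is quantized to integers and the boundary tensors are uint8, which is what the Edge TPU compiler expects.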
Answer 2 (score: 1)
I had a similar error. I used the tf-nightly 1.15 build for post-training full-integer quantization to produce the .tflite file, then compiled it with the Edge TPU compiler, which should work; this approach fixed my error.
The same issue was raised on GitHub; you can see it here.
Answer 3 (score: 0)
Do you still run into this problem after updating to the latest compiler version?
Edge TPU Compiler version 2.0.267685300