TFLite Cannot set tensor: dimension mismatch when converting a model

Asked: 2019-11-09 02:17:04

Tags: python tensorflow keras iot tf-lite

I have a Keras model built as follows:

module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
backbone = hub.KerasLayer(module_url)
backbone.build([None, 224, 224, 3])
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(len(classes), activation='softmax')])
model.build([None, 224, 224, 3])
model.compile('adam', loss='sparse_categorical_crossentropy')

I then load the Caltech101 dataset from TensorFlow Datasets as follows:

samples, info = tfds.load("caltech101", with_info=True)
train_samples, test_samples = samples['train'], samples['test']
def normalize(row):
    image, label = row['image'], row['label']
    image = tf.dtypes.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, label
train_data = train_samples.repeat().shuffle(1024).map(normalize).batch(32).prefetch(1)
test_data = test_samples.map(normalize).batch(1)

Now I train and save my model as shown below:

model.fit_generator(train_data, epochs=1, steps_per_epoch=100)
saved_model_dir = './output'
tf.saved_model.save(model, saved_model_dir)

At this point the model works, and I can evaluate inputs of shape (224, 224, 3). I try to convert the model as follows:

def generator2():
  data = train_samples
  for _ in range(num_calibration_steps):
    images = []
    for image, _ in data.map(normalize).take(1):
      images.append(image)
    yield images

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

converter.representative_dataset = tf.lite.RepresentativeDataset(generator2)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_default_quant_model = converter.convert()

The conversion triggers the following error:

/usr/local/lib/python3.6/dist-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py in FeedTensor(self, input_value)
    110 
    111     def FeedTensor(self, input_value):
--> 112         return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_FeedTensor(self, input_value)
    113 
    114     def QuantizeModel(self, input_py_type, output_py_type, allow_float):

ValueError: Cannot set tensor: Dimension mismatch

There is a similar question, but in that case they were loading an already converted model, unlike my situation where the problem occurs while I am trying to convert the model.

The converter object is a class auto-generated from C++ code with SWIG, which makes it hard to inspect. How can I find the exact dimensions the converter object expects?
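For context, the error comes down to a missing batch dimension: the calibrator feeds each array yielded by the representative dataset straight into the model, whose input signature (assumed here to be [1, 224, 224, 3]) includes a batch dimension, while generator2 yields raw images of shape (224, 224, 3). A minimal NumPy sketch of the mismatch:

```python
import numpy as np

# Assumed shapes: the SavedModel signature fixes the batch dimension to 1,
# while generator2 above yields images without any batch dimension.
expected_shape = (1, 224, 224, 3)                   # what the calibrator feeds the model
image = np.zeros((224, 224, 3), dtype=np.float32)   # what generator2 yields

print(image.shape == expected_shape)    # False: rank mismatch triggers the ValueError
batched = np.expand_dims(image, axis=0) # add the batch dimension
print(batched.shape == expected_shape)  # True
```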

2 answers:

Answer 0 (score: 2)

I ran into the same problem when using

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input]

from https://www.tensorflow.org/lite/performance/post_training_quantization. It seems converter.representative_dataset expects a list containing one sample of shape (1, input_shape). That is, using something along the lines of

def representative_dataset_gen():
    for i in range(num_calibration_steps):
        # Get sample input data as a numpy array in a method of your choosing.
        yield [input[i:i+1]]

solved the problem, given that input has shape (num_samples, input_shape). In your case, using TF Datasets, a working example would be:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds

samples, info = tfds.load("caltech101", with_info=True)
train_samples, test_samples = samples['train'], samples['test']

def normalize(row):
    image, label = row['image'], row['label']
    image = tf.dtypes.cast(image, tf.float32)
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, label

train_data = train_samples.repeat().shuffle(1024).map(normalize).batch(32).prefetch(1)
test_data = test_samples.map(normalize).batch(1)

module_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"
backbone = hub.KerasLayer(module_url)
backbone.build([None, 224, 224, 3])
model = tf.keras.Sequential([backbone, tf.keras.layers.Dense(102, activation='softmax')])
model.build([None, 224, 224, 3])
model.compile('adam', loss='sparse_categorical_crossentropy')

model.fit_generator(train_data, epochs=1, steps_per_epoch=100)
saved_model_dir = 'output/'
tf.saved_model.save(model, saved_model_dir)

num_calibration_steps = 50

def generator():
    single_batches = train_samples.repeat(count=1).map(normalize).batch(1)
    i = 0
    for batch in single_batches:
        if i >= num_calibration_steps:
            break
        i += 1
        # batch is an (image, label) pair; batch[0] has shape (1, 224, 224, 3)
        yield [batch[0]]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

converter.representative_dataset = tf.lite.RepresentativeDataset(generator)
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_default_quant_model = converter.convert()
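The yield pattern above can be sanity-checked without TensorFlow by substituting a list of NumPy batches for the dataset (the shapes below are assumptions matching the model above, and the synthetic data is purely illustrative):

```python
import numpy as np

num_calibration_steps = 50

def generator():
    # Stand-in for train_samples...batch(1): each "batch" is an
    # (image_batch, label_batch) pair with image_batch of shape (1, 224, 224, 3).
    single_batches = ((np.zeros((1, 224, 224, 3), np.float32),
                       np.zeros((1,), np.int64))
                      for _ in range(1000))
    for i, batch in enumerate(single_batches):
        if i >= num_calibration_steps:
            break
        yield [batch[0]]

samples = list(generator())
print(len(samples))         # 50
print(samples[0][0].shape)  # (1, 224, 224, 3)
```

Each yielded item is a one-element list whose array carries the batch dimension, which is exactly what the calibration wrapper feeds into the model.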

Answer 1 (score: 1)

I ran into the same problem and used this solution. Set inputs_test to your own test inputs and it should work for you as well:

import numpy as np
import tensorflow as tf

def representative_dataset():
    # inputs_test has shape (num_samples, 224, 224, 3); expand_dims turns it
    # into per-sample arrays of shape (1, 224, 224, 3).
    arrs = np.expand_dims(inputs_test, axis=1).astype(np.float32)
    for data in arrs:
        yield [data]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
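Note that with inference_input_type set to tf.int8, callers must quantize float inputs themselves using the scale and zero point reported by the interpreter's input details. The affine mapping itself is simple; the scale and zero point below are assumed example values (real ones come from interpreter.get_input_details()[0]['quantization']):

```python
import numpy as np

# Assumed quantization parameters for inputs in [0, 1]; real values come
# from interpreter.get_input_details()[0]['quantization'].
scale, zero_point = 1.0 / 255.0, -128

def quantize(x, scale, zero_point):
    # float -> int8 affine quantization as used by TFLite
    return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # int8 -> float, the inverse mapping for model outputs
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([0.0, 0.25, 1.0], dtype=np.float32)
q = quantize(x, scale, zero_point)
print(q.tolist())  # [-128, -64, 127]
```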