Different results when evaluating a tflite model from Python and from Android

Time: 2019-08-15 12:42:50

Tags: python android tensorflow keras

I have trained a model with Keras (in Google Colab) and then converted the Keras h5 file to tflite with TFLiteConverter (also in Google Colab), using the following code:

import tensorflow as tf

tflite_converter = tf.lite.TFLiteConverter.from_keras_model_file(<KERAS_H5_MODEL_PATH>)

tflite_model = tflite_converter.convert()

with open(<KERAS_TFLITE_DEST_PATH>, 'wb') as tflite_model_file:
    tflite_model_file.write(tflite_model)
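
As a sanity check of the conversion itself, the converted file can be compared with the original Keras model on the same random input. Below is a minimal sketch reusing the placeholders above; it assumes the plain TF 1.14 API, and loading the h5 file may additionally need custom_objects depending on how the YOLO model was built:

import numpy as np
import tensorflow as tf

# Load both the original Keras model and the converted TFLite model.
# (custom_objects may be required here, depending on how the model was defined)
keras_model = tf.keras.models.load_model(<KERAS_H5_MODEL_PATH>)
interpreter = tf.lite.Interpreter(model_path=<KERAS_TFLITE_DEST_PATH>)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed the same random input to both models.
dummy_input = np.random.rand(*input_details[0]['shape']).astype(np.float32)
keras_outputs = keras_model.predict(dummy_input)
interpreter.set_tensor(input_details[0]['index'], dummy_input)
interpreter.invoke()
tflite_outputs = [interpreter.get_tensor(d['index']) for d in output_details]

# Compare per output head (the output ordering reported by the TFLite
# interpreter may differ from the Keras ordering, so check the shapes).
for k, t in zip(keras_outputs, tflite_outputs):
    print(k.shape, t.shape, np.max(np.abs(k - t)))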

After that, I ran the tflite model (also in Google Colab):

import numpy as np
import tensorflow as tf

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path=tflite_model_path)
interpreter.allocate_tensors()

# Get the input and output tensor details.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(input_details[0]['index'], <INPUT_DATA>)

interpreter.invoke()

output = [interpreter.get_tensor(output_details[0]['index'])[0],
          interpreter.get_tensor(output_details[1]['index'])[0]]
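
To rule out a layout or type mismatch, the interpreter's tensor metadata can also be printed and compared against what the Android code expects (a float32 input of shape [1, inputSize, inputSize, 3] and float32 outputs of shapes [1, 8, 8, 18] and [1, 16, 16, 18]). A short sketch using the same tf.lite.Interpreter API:

# Print name, shape, dtype and quantization parameters of each tensor.
for detail in interpreter.get_input_details():
    print('input :', detail['name'], detail['shape'], detail['dtype'], detail['quantization'])
for detail in interpreter.get_output_details():
    print('output:', detail['name'], detail['shape'], detail['dtype'], detail['quantization'])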

However, when I run the tflite model in an Android app (with the same input data), I get a different output:

(This code is based on this example in TensorFlow.)

// Memory-map the model file from the app assets.
AssetFileDescriptor fileDescriptor = assets.openFd(<MODEL_FILENAME>);
FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
FileChannel fileChannel = inputStream.getChannel();
long startOffset = fileDescriptor.getStartOffset();
long declaredLength = fileDescriptor.getDeclaredLength();
MappedByteBuffer model = fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);

// Create the interpreter, with fp16 precision for fp32 explicitly disabled.
Interpreter.Options interpreterOptions = new Interpreter.Options()
        .setNumThreads(NUM_THREADS)
        .setAllowFp16PrecisionForFp32(Boolean.FALSE);
tfLite = new Interpreter(model, interpreterOptions);

// Input tensor: [1, inputSize, inputSize, 3], float32.
float[][][][] floatValues = new float[1][inputSize][inputSize][3];

// floatValues array is initialized with the same values as in python code above

Object[] inputArray = {floatValues};

// The two output heads of the YOLO v3 tiny model.
output1 = new float[1][8][8][18];
output2 = new float[1][16][16][18];

Map<Integer, Object> outputMap = new HashMap<>();
outputMap.put(0, output1);
outputMap.put(1, output2);

tfLite.runForMultipleInputsOutputs(inputArray, outputMap);

The output values I get in the Android code bear no relation to the values obtained with the Python code. The model (YOLO v3 tiny) is used to detect objects in images, and I believe the values obtained with the Python code are the correct ones.

Why do I get a different output on Android? Do I need to do something special to run the model on Android? Does the conversion of the model from h5 to tflite need to be done differently?

Note: the TensorFlow version in Google Colab is 1.14.0. On Android I use this version of tflite: org.tensorflow:tensorflow-lite:1.14.0

UPDATE

In both cases (Python and Android) I tried with the same input array, to make sure the input data was identical, and the results differ:

 Python results    |  Android results
 0 >   0.06118933  |  0 >   0.061190486
 1 > - 0.50384498  |  1 > - 0.50384396
 2 > - 0.30500048  |  2 > - 0.30500013
 3 > - 0.18725708  |  3 > - 0.18725668
 4 > -12.56872463  |  4 > -12.568727
 5 > - 3.53239870  |  5 > - 3.5323968
 6 >   0.26756036  |  6 >   0.26756087
 7 > - 0.63708508  |  7 > - 0.6370841
 8 > - 0.11959708  |  8 > - 0.119596675
 9 > - 0.42219949  |  9 > - 0.42219883
10 > -12.26699734  | 10 > -12.266998
11 > - 3.42504168  | 11 > - 3.4250417
12 >   0.29714739  | 12 >   0.2971481
13 > - 0.76750284  | 13 > - 0.7675022
14 > - 0.13859260  | 14 > - 0.13859189
15 > - 0.02096577  | 15 > - 0.020965554
16 > -12.78137684  | 16 > -12.781382
17 > - 3.34643984  | 17 > - 3.3464384
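
To quantify how far apart the two columns are, they can be compared numerically. A small numpy sketch, where the two arrays simply re-type the values from the table above:

import numpy as np

# Values copied from the table above.
python_results = np.array([
      0.06118933, -0.50384498, -0.30500048, -0.18725708, -12.56872463, -3.53239870,
      0.26756036, -0.63708508, -0.11959708, -0.42219949, -12.26699734, -3.42504168,
      0.29714739, -0.76750284, -0.13859260, -0.02096577, -12.78137684, -3.34643984])
android_results = np.array([
      0.061190486, -0.50384396, -0.30500013, -0.18725668, -12.568727, -3.5323968,
      0.26756087, -0.6370841, -0.119596675, -0.42219883, -12.266998, -3.4250417,
      0.2971481, -0.7675022, -0.13859189, -0.020965554, -12.781382, -3.3464384])

diff = np.abs(python_results - android_results)
print('max absolute difference:', diff.max())
print('max relative difference:', (diff / np.abs(python_results)).max())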

0 Answers:

No answers yet