I am following this guide to get a simple tflite detector running in my Xamarin.Android app: Realtime Mobile Detection

I can confirm that the demo app from the article runs fine. However, as you can see, the article is built around a custom-trained model for hard hats. I simply want to swap in my own custom-trained model, so I went to the Google Cloud Platform > Vision site and created/trained a new model. When it finished, I exported it as a tflite model. After swapping in just the model and labels, I now get this exception:
String conversion error: Illegal byte sequence encountered in the input.

at (wrapper managed-to-native) System.Runtime.InteropServices.Marshal.PtrToStringAnsi(intptr)
at Emgu.TF.Lite.TfLiteInvoke.TfliteErrorHandler (System.Int32 status, System.IntPtr errMsg) [0x00001] in C:\Personal\Ensight\Ensight.LPR.Mobile\TF\CameraTF\Emgu.TF.Lite.Shared\TFLiteInvoke.cs:26
at (wrapper native-to-managed) Emgu.TF.Lite.TfLiteInvoke.TfliteErrorHandler(int,intptr)
at (wrapper managed-to-native) Emgu.TF.Lite.TfLiteInvoke.tfeInterpreterInvoke(intptr)
at Emgu.TF.Lite.Interpreter.Invoke () [0x00001] in C:\Personal\Ensight\Ensight.LPR.Mobile\TF\CameraTF\Emgu.TF.Lite.Shared\Interpreter.cs:75
at CameraTF.TensorflowLiteService.Recognize (System.IntPtr colors, System.Int32 colorsCount) [0x00015] in C:\Personal\Ensight\Ensight.LPR.Mobile\TF\CameraTF\CameraTF\AR\TensorflowLiteService.cs:66
at CameraTF.CameraAccess.CameraAnalyzer.DecodeFrame (ApxLabs.FastAndroidCamera.FastJavaByteArray fastArray) [0x00175] in C:\Personal\Ensight\Ensight.LPR.Mobile\TF\CameraTF\CameraTF\Camera\CameraAnalyzer.cs:156
at CameraTF.CameraAccess.CameraAnalyzer+<>c__DisplayClass25_0.b__0 () [0x00002] in C:\Personal\Ensight\Ensight.LPR.Mobile\TF\CameraTF\CameraTF\Camera\CameraAnalyzer.cs:114
The exception is thrown at the exact moment this method is called:
public void Recognize(IntPtr colors, int colorsCount)
{
    CopyColorsToTensor(colors, colorsCount, inputTensor.DataPointer);

    interpreter.Invoke();

    var detectionBoxes = (float[])outputTensors[0].GetData();
    var detectionClasses = (float[])outputTensors[1].GetData();
    var detectionScores = (float[])outputTensors[2].GetData();
    var detectionNumDetections = (float[])outputTensors[3].GetData();

    var numDetections = (int)detectionNumDetections[0];

    Stats.NumDetections = numDetections;
    Stats.Labels = detectionClasses;
    Stats.Scores = detectionScores;
    Stats.BoundingBoxes = detectionBoxes;
}
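One assumption worth ruling out at this point: the export metadata (shown further down) says the new model is QUANTIZED_UINT8, while the article's hard-hat model may well have been a float model. A quantized model expects one byte per channel; a float model expects four. If CopyColorsToTensor writes float32 values into a uint8 input buffer, Invoke can fail with a native error like the one above. The size difference is easy to see with a quick stdlib-only sketch (shape taken from the metadata):

```python
# Expected input buffer sizes for a 1x320x320x3 model, depending on
# whether the input tensor is uint8 (quantized) or float32.
shape = (1, 320, 320, 3)

def buffer_size(shape, bytes_per_element):
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_element

uint8_bytes = buffer_size(shape, 1)    # quantized model input
float32_bytes = buffer_size(shape, 4)  # float model input

print(uint8_bytes)    # 307200
print(float32_bytes)  # 1228800
```

If the copy routine assumes the float size, it writes four times more data than the quantized tensor holds, so checking the input tensor's type and byte size in the app before copying would be my first step.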
Here is the initialization that runs before Recognize is ever attempted; it allocates the tensors and wires up the inputs/outputs. I have confirmed this initialization succeeds, and I can see the input and output layers:
public bool Initialize(Stream modelData, bool useNumThreads, bool useNNApi)
{
    using (var ms = new MemoryStream())
    {
        modelData.CopyTo(ms);
        model = new FlatBufferModel(ms.ToArray());
    }

    if (!model.CheckModelIdentifier())
    {
        return false;
    }

    var op = new BuildinOpResolver();
    interpreter = new Interpreter(model, op);
    interpreter.UseNNAPI(useNNApi);

    if (useNumThreads)
    {
        interpreter.SetNumThreads(Environment.ProcessorCount);
    }

    var allocateTensorStatus = interpreter.AllocateTensors();
    if (allocateTensorStatus == Status.Error)
    {
        return false;
    }

    //var input = interpreter.GetInput();
    //inputTensor = interpreter.GetTensor(input[0]);
    if (inputTensor == null)
    {
        inputTensor = interpreter.Inputs[0];
    }

    if (outputTensors == null)
    {
        outputTensors = interpreter.Outputs;
    }

    //var output = interpreter.GetOutput();
    //var outputIndex = output[0];
    //outputTensors = new Tensor[output.Length];
    //for (var i = 0; i < output.Length; i++)
    //{
    //    outputTensors[i] = interpreter.GetTensor(outputIndex + i);
    //}

    return true;
}
Likewise, I have made sure the rest of the code (image reshaping, etc.) is not the problem, because I can swap the original model back in and everything works again. It is something specific to this model that I trained on Google Cloud and exported as tflite. I did check for differences in input shape/size, and that is not the issue. I'm not sure what else it could be; the models look very similar. Here is the export metadata I got from Google Cloud, which looks standard:
{
  "inferenceType": "QUANTIZED_UINT8",
  "inputShape": [
    1,
    320,
    320,
    3
  ],
  "inputTensor": "normalized_input_image_tensor",
  "maxDetections": 40,
  "outputTensorRepresentation": [
    "bounding_boxes",
    "class_labels",
    "class_confidences",
    "num_of_boxes"
  ],
  "outputTensors": [
    "TFLite_Detection_PostProcess",
    "TFLite_Detection_PostProcess:1",
    "TFLite_Detection_PostProcess:2",
    "TFLite_Detection_PostProcess:3"
  ]
}
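This metadata can also sanity-check the wiring in Recognize: the output order maps index 0-3 to boxes/classes/scores/count, matching how outputTensors is indexed, and with maxDetections of 40 the boxes array should come back as 40 * 4 floats. A stdlib-only sketch over the metadata pasted above:

```python
import json

# The export metadata shown above (inputTensor key omitted for brevity).
metadata = json.loads("""{
  "inferenceType": "QUANTIZED_UINT8",
  "inputShape": [1, 320, 320, 3],
  "maxDetections": 40,
  "outputTensorRepresentation": ["bounding_boxes", "class_labels",
                                 "class_confidences", "num_of_boxes"],
  "outputTensors": ["TFLite_Detection_PostProcess",
                    "TFLite_Detection_PostProcess:1",
                    "TFLite_Detection_PostProcess:2",
                    "TFLite_Detection_PostProcess:3"]
}""")

# Output tensor name -> meaning, in the same order Recognize() indexes them.
mapping = dict(zip(metadata["outputTensors"],
                   metadata["outputTensorRepresentation"]))

# Each detection box is 4 floats (ymin, xmin, ymax, xmax).
expected_box_floats = metadata["maxDetections"] * 4
print(expected_box_floats)  # 160
```

If detectionBoxes comes back with a different length, or the outputs arrive in a different order than the metadata lists, that would point to the swap rather than the model itself.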
Any ideas or things to try? Thanks!