Illegal Instruction when invoking a Tensorflow Lite .tflite CNN model from C++

Asked: 2019-05-06 20:50:25

Tags: c++ tensorflow tensorflow-lite

I am getting an Illegal Instruction when invoking a Tensorflow Lite .tflite model with the following lines of code.

The platform is Raspbian Stretch running on a BeagleBone Black.

if (interpreter->Invoke() != kTfLiteOk) {
      std::cout << "Failed to invoke tflite!\n";
    }

I have successfully used this same code with a converted plain ANN model; however, I run into this problem when using a CNN-type model.

Attached is the gdb backtrace.

I have also tried invoking a couple of the other hosted Tensorflow TFLite models, mobilenet and squeezenet, and I hit the same problem with those as well. The structure of the converted model is shown above the backtrace.

The backtrace is:

input(0) name: images
0: ArgMax, 8, 4, 0, 0
1: ArgMax/dimension, 4, 2, 0, 0
2: ConvNet/Reshape, 45120, 1, 0, 0
3: ConvNet/Reshape/shape, 16, 2, 0, 0
4: ConvNet/conv2d/Conv2D_bias, 64, 1, 0, 0
5: ConvNet/conv2d/Relu, 674880, 1, 0, 0
6: ConvNet/conv2d/kernel, 1024, 1, 0, 0
7: ConvNet/conv2d_1/Conv2D_bias, 128, 1, 0, 0
8: ConvNet/conv2d_1/Relu, 299520, 1, 0, 0
9: ConvNet/conv2d_1/kernel, 18432, 1, 0, 0
10: ConvNet/dense/BiasAdd, 1024, 1, 0, 0
11: ConvNet/dense/MatMul_bias, 1024, 1, 0, 0
12: ConvNet/dense/kernel/transpose, 19169280, 1, 0, 0
13: ConvNet/dense_1/BiasAdd, 8, 1, 0, 0
14: ConvNet/dense_1/MatMul_bias, 8, 1, 0, 0
15: ConvNet/dense_1/kernel/transpose, 2048, 1, 0, 0
16: ConvNet/max_pooling2d/MaxPool, 164864, 1, 0, 0
17: ConvNet/max_pooling2d_1/MaxPool, 74880, 1, 0, 0
18: images, 45120, 1, 0, 0
input: 18
About to memcpy
About to invoke mod!

Thread 1 "minimal" received signal SIGILL, Illegal instruction.
0x0007de64 in EigenForTFLite::TensorCostModel<EigenForTFLite::Threanst&, int) ()
(gdb) bt
#0  0x0007de64 in EigenForTFLite::TensorCostModel<EigenForTFLite::Tt const&, int) ()
#1  0x000901aa in void EigenForTFLite::TensorEvaluator<EigenForTFLi>, 1u> const, EigenForTFLite::TensorReshapingOp<EigenForTFLite::DSigenForTFLite::TensorMap<EigenForTFLite::Tensor<float const, 4, 1, iForTFLite::TensorReshapingOp<EigenForTFLite::DSizes<int, 2> const,  1, int>, 16, EigenForTFLite::MakePointer> const> const, EigenForTFevice>::evalProduct<0>(float*) const ()
#2  0x00090bae in tflite::multithreaded_ops::EigenTensorConvFunctorat const*, float*, int, int, int, int, float const*, int, int, int,
#3  0x00091200 in void tflite::ops::builtin::conv::EvalFloat<(tflit, TfLiteConvParams*, tflite::ops::builtin::conv::OpData*, TfLiteTen, TfLiteTensor*) ()
#4  0x0009134e in TfLiteStatus tflite::ops::builtin::conv::Eval<(tfde*) ()
#5  0x00047c2e in tflite::Subgraph::Invoke() ()
#6  0x00013b70 in tflite::Interpreter::Invoke() ()
#7  0x00012fc4 in main ()
(gdb)

Initially I thought I had included some type of Tensorflow operation that Tensorflow Lite does not support, but now that none of the other models seem to invoke either, I'm not sure.

The Tensorflow Git tag/version is 1.13.1.

The demo is compiled from the source tree with:

CC_PREFIX=arm-linux-gnueabihf- make -j 3 -f tensorflow/lite/tools/make/Makefile TARGET=rpi TARGET_ARCH=armv7l minimal

where minimal is a new makefile target created in

/tensorflow/tensorflow/lite/tools/make/Makefile

The broader code is modified from the minimal and label_image tflite demos:

// Load the model from disk.
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile(filename);
TFLITE_MINIMAL_CHECK(model != nullptr);

// Build the interpreter.
tflite::ops::builtin::BuiltinOpResolver resolver;
tflite::InterpreterBuilder builder(*model, resolver);
std::unique_ptr<tflite::Interpreter> interpreter;
builder(&interpreter);
TFLITE_MINIMAL_CHECK(interpreter != nullptr);

// Allocate tensor buffers.
TFLITE_MINIMAL_CHECK(interpreter->AllocateTensors() == kTfLiteOk);
printf("=== Pre-invoke Interpreter State ===\n");
tflite::PrintInterpreterState(interpreter.get());

int input = interpreter->inputs()[0];
LOG(INFO) << "input: " << input << "\n";

std::cout << "About to memcpy\n";

// Copy the input data into the interpreter's input tensor.
float* input_ptr = interpreter->typed_tensor<float>(input);
memcpy(input_ptr, float_buf, tf_input_size * sizeof(float));

if (interpreter->Invoke() != kTfLiteOk) {
  std::cout << "Failed to invoke tflite!\n";
}
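
Not the cause of the crash here, but one sanity check worth having before the memcpy (a sketch; in_tensor is a new local, while float_buf and tf_input_size are the buffers from my code above): confirm the copy size matches what TFLite actually allocated for the input tensor.

// Sketch: guard the memcpy by comparing the copy size against the
// byte count TFLite allocated for the input tensor.
TfLiteTensor* in_tensor = interpreter->tensor(input);
if (in_tensor->bytes != tf_input_size * sizeof(float)) {
  std::cout << "Input size mismatch: tensor has " << in_tensor->bytes
            << " bytes, copying " << tf_input_size * sizeof(float) << "\n";
}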

Any pointers appreciated.

:: EDIT ::

Wow. Running the exact same executable and .tflite with data on a Raspberry Pi works 100%.
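
A couple of quick checks that can confirm an instruction-set/FPU mismatch like this (a sketch, run against the cross-built binary; output omitted):

(gdb) x/i $pc          # disassemble the exact instruction that raised SIGILL

$ arm-linux-gnueabihf-readelf -A minimal | grep -i fp   # inspect the binary's Tag_FP_arch attribute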

1 answer:

Answer 0 (score: 0):

I was compiling for the wrong FPU, which only showed up when invoking a CNN rather than an ANN.

Changed the -mfpu in rpi_makefile.inc to target neon (v3):

-mfpu=neon \
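
For context, a sketch of the relevant block in tensorflow/lite/tools/make/targets/rpi_makefile.inc (the exact surrounding flags vary by TF version; the 1.13-era rpi target reportedly defaults to -mfpu=neon-vfpv4, which the BeagleBone Black's Cortex-A8, a NEON/VFPv3-only core, cannot execute):

# tensorflow/lite/tools/make/targets/rpi_makefile.inc (sketch, TF 1.13.x)
# The stock rpi target assumes -mfpu=neon-vfpv4 (Pi 2/3 class cores);
# the BeagleBone Black's Cortex-A8 only implements NEON/VFPv3,
# so drop the flag down to plain neon.
ifeq ($(TARGET_ARCH), armv7l)
  CXXFLAGS += \
    -march=armv7-a \
    -mfpu=neon \
    -funsafe-math-optimizations \
    -ftree-vectorize \
    -fPIC
endif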

Tflite seems to be much better behaved on the BeagleBone Black now.