How to load an ONNX-based model in ArmNN for Linux in C++

Time: 2019-08-23 06:33:19

Tags: c++ c linux arm onnx

I am trying to create a standalone C++ application based on ArmNN that can run ONNX models. To start with, I downloaded some standard models for testing, and while trying to load a model I see a crash that says "Tensor numDimensions must be greater than 0".

The strange thing is that the model-loading function I call takes only one argument, the model file name; there is nowhere for me to specify dimensions or anything else. Am I doing something wrong here, or is this simply not the way to load a model?

I compiled ArmNN with ONNX support as detailed here. The build and include folders were copied over to the ARM Linux machine on which I am trying to run the code. I am using a Makefile to compile and run it.

The model I am currently using was downloaded from here.

Initially I was on the ArmNN master branch. While searching for this error message I came across the ArmNN release notes, which mention that this very error was fixed in release 19.05. So I switched to the v19.05 tag, rebuilt everything from scratch and ran the application again, but the same error keeps coming up.

Here is the C++ code -

#include "armnn/ArmNN.hpp"
#include "armnn/Exceptions.hpp"
#include "armnn/Tensor.hpp"
#include "armnn/INetwork.hpp"
#include "armnnOnnxParser/IOnnxParser.hpp"


int main(int argc, char** argv)
{
    armnnOnnxParser::IOnnxParserPtr parser = armnnOnnxParser::IOnnxParser::Create();
    std::cout << "\nmodel load start";
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.onnx");
    std::cout << "\nmodel load end";

    std::cout << "\nmain end";
    return 0;
}

The Makefile looks like this -

ARMNN_LIB = /home/root/Rahul/armnn_onnx/build
ARMNN_INC = /home/root/Rahul/armnn_onnx/include

all: onnx_test

onnx_test: onnx_test.cpp 
        g++ -O3 -std=c++14 -I$(ARMNN_INC) onnx_test.cpp -I. -I/usr/include -L/usr/lib -lopencv_core -lopencv_imgcodecs -lopencv_highgui  -o onnx_test -L$(ARMNN_LIB) -larmnn -lpthread -larmnnOnnxParser

clean:
        -rm -f onnx_test

test: onnx_test
        LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(ARMNN_LIB) ./onnx_test

Expected output - The code should load the model and exit cleanly.

Actual error message -

terminate called after throwing an instance of 'armnn::InvalidArgumentException'
  what():  Tensor numDimensions must be greater than 0
model load startAborted (core dumped)

The gdb backtrace is given below -

(gdb) r
Starting program: /home/root/Rahul/sample_onnx/onnx_test 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/libthread_db.so.1".

terminate called after throwing an instance of 'armnn::InvalidArgumentException'
  what():  Tensor numDimensions must be greater than 0
model load start
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at /usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
51  }
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at /usr/src/debug/glibc/2.26-r0/git/sysdeps/unix/sysv/linux/raise.c:51
#1  0x0000ffffbe97ff00 in __GI_abort () at /usr/src/debug/glibc/2.26-r0/git/stdlib/abort.c:90
#2  0x0000ffffbec0c0f8 in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libstdc++.so.6
#3  0x0000ffffbec09afc in ?? () from /usr/lib/libstdc++.so.6
#4  0x0000ffffbec09b50 in std::terminate() () from /usr/lib/libstdc++.so.6
#5  0x0000ffffbec09e20 in __cxa_throw () from /usr/lib/libstdc++.so.6
#6  0x0000ffffbefdad84 in armnn::TensorShape::TensorShape(unsigned int, unsigned int const*) () from /home/root/Rahul/armnn_onnx/build/libarmnn.so
#7  0x0000ffffbed454d8 in armnnOnnxParser::(anonymous namespace)::ToTensorInfo(onnx::ValueInfoProto const&) [clone .constprop.493] () from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
#8  0x0000ffffbed46080 in armnnOnnxParser::OnnxParser::SetupInfo(google::protobuf::RepeatedPtrField<onnx::ValueInfoProto> const*) () from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
#9  0x0000ffffbed461ac in armnnOnnxParser::OnnxParser::LoadGraph() () from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
#10 0x0000ffffbed46760 in armnnOnnxParser::OnnxParser::CreateNetworkFromModel(onnx::ModelProto&) () from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
#11 0x0000ffffbed469b0 in armnnOnnxParser::OnnxParser::CreateNetworkFromBinaryFile(char const*) () from /home/root/Rahul/armnn_onnx/build/libarmnnOnnxParser.so
#12 0x0000000000400a48 in main ()

2 Answers:

Answer 0: (score: 0)

It looks like scalars in ONNX are represented as tensors with no dimensions. So the problem here is that armnnOnnxParser does not handle ONNX scalars correctly. I would suggest raising an issue on the armnn GitHub.
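
The backtrace shows the parser throwing armnn::InvalidArgumentException from inside CreateNetworkFromBinaryFile. If you want your application to report this cleanly instead of aborting, you can catch that exception around the parse call. This is only a minimal sketch of the error handling, not a fix for the parser itself:

#include <iostream>

#include "armnn/ArmNN.hpp"
#include "armnn/Exceptions.hpp"
#include "armnnOnnxParser/IOnnxParser.hpp"

int main()
{
    armnnOnnxParser::IOnnxParserPtr parser = armnnOnnxParser::IOnnxParser::Create();

    try
    {
        // Throws armnn::InvalidArgumentException when the parser builds a
        // TensorInfo from a graph value that has no dimensions (a scalar).
        armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile("model.onnx");
        std::cout << "model loaded" << std::endl;
    }
    catch (const armnn::InvalidArgumentException& e)
    {
        // Exit cleanly instead of letting the unhandled exception call abort().
        std::cerr << "failed to parse model.onnx: " << e.what() << std::endl;
        return 1;
    }

    return 0;
}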

Answer 1: (score: 0)

I think you should try with at least one input layer and one output layer.

// Helper function to make input tensors
armnn::InputTensors MakeInputTensors(const std::pair<armnn::LayerBindingId,
    armnn::TensorInfo>& input,
    const void* inputTensorData)
{
    return { { input.first, armnn::ConstTensor(input.second, inputTensorData) } };
}
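
After parsing, the network still has to be optimised, loaded into a runtime and run against bound input/output tensors. Below is a minimal sketch of that flow, continuing from the question's code after CreateNetworkFromBinaryFile() has succeeded and using the helper above. The tensor names "Input" and "Output", the float data type and the CpuRef backend are assumptions - check the actual names and shapes of your model (for example with Netron) before using them:

// Continuing inside main(), after parser->CreateNetworkFromBinaryFile("model.onnx"):

// Look up binding info (layer id + TensorInfo) for the model's input and output.
// "Input" and "Output" are placeholder names - use the tensor names from your model.
armnnOnnxParser::BindingPointInfo inputBinding  = parser->GetNetworkInputBindingInfo("Input");
armnnOnnxParser::BindingPointInfo outputBinding = parser->GetNetworkOutputBindingInfo("Output");

// Create a runtime and optimise the parsed network for the chosen backend.
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
armnn::IOptimizedNetworkPtr optNet =
    armnn::Optimize(*network, {armnn::Compute::CpuRef}, runtime->GetDeviceSpec());

armnn::NetworkId networkId;
runtime->LoadNetwork(networkId, std::move(optNet));

// Input/output buffers sized from the bound TensorInfos (filling the input is omitted).
std::vector<float> inputData(inputBinding.second.GetNumElements());
std::vector<float> outputData(outputBinding.second.GetNumElements());

armnn::InputTensors inputTensors = MakeInputTensors(inputBinding, inputData.data());
armnn::OutputTensors outputTensors =
    { { outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data()) } };

// Run inference; results end up in outputData.
runtime->EnqueueWorkload(networkId, inputTensors, outputTensors);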

For reference, see: https://developer.arm.com/solutions/machine-learning-on-arm/developer-material/how-to-guides/configuring-the-arm-nn-sdk-build-environment-for-onnx