I want to implement a custom image classifier using Mask R-CNN.
To speed up the network, I want to optimize the inference.
I have already used the OpenCV DNN library, but I would like to go a step further with OpenVINO.
I successfully used the OpenVINO Model Optimizer (Python) to build the .xml and .bin files representing my network.
I successfully built the OpenVINO samples directory with Visual Studio 2017 and ran the MaskRCNNDemo project.
mask_rcnn_demo.exe -m .\Release\frozen_inference_graph.xml -i .\Release\input.jpg
InferenceEngine:
API version ............ 1.4
Build .................. 19154
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] .\Release\input.jpg
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. win_20181005
Description ....... MKLDNNPlugin
[ INFO ] Loading network files
[ INFO ] Preparing input blobs
[ WARNING ] Image is resized from (4288, 2848) to (800, 800)
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ INFO ] Start inference (1 iterations)
Average running time of one iteration: 2593.81 ms
[ INFO ] Processing output blobs
[ INFO ] Detected class 16 with probability 0.986519: [2043.3, 1104.9], [2412.87, 1436.52]
[ INFO ] Image out.png created!
[ INFO ] Execution successful
Then I tried to reproduce this project in a standalone project... First, I had to take care of the dependencies:
<MaskRCNNDemo>
//References
<format_reader/> => OpenCV images: load, resize and get uchar data
<ie_cpu_extension/> => CPU extension for layers not handled natively (?)
//Linker
format_reader.lib => Format Reader Lib (VINO Samples Compiled)
cpu_extension.lib => CPU extension Lib (VINO Samples Compiled)
inference_engined.lib => Inference Engine lib (VINO)
opencv_world401d.lib => OpenCV Lib
libiomp5md.lib => Dependency
... (other libs)
Using this, I built a new project with my own classes and my own way of opening images (multi-frame TIFF). That works without any problem, so I won't describe it here (I already use it with the OpenCV DNN inference without issues).
I want to build the same kind of project as MaskRCNNDemo: CustomIA
<CustomIA>
//References
None => I use my own libtiff-based code to open images, and I resize them with OpenCV
None => I just add the cpu_extension source code to my includes.
//Linker
opencv_world345d.lib => OpenCV 3.4.5 library
tiffd.lib => Libtiff Library
cpu_extension.lib => CPU extension compiled with sample
inference_engined.lib => Inference engine lib.
I added the following DLLs to the project target directory:
cpu_extension.dll
inference_engined.dll
libiomp5md.dll
mkl_tiny_omp.dll
MKLDNNPlugind.dll
opencv_world345d.dll
tiffd.dll
tiffxxd.dll
It compiles and runs successfully, but I ran into two issues:
First issue:
Old code:
slog::info << "Loading plugin" << slog::endl;
InferencePlugin plugin = PluginDispatcher({ FLAGS_pp, "../../../lib/intel64", "" }).getPluginByDevice(FLAGS_d);
/** Loading default extensions **/
if (FLAGS_d.find("CPU") != std::string::npos) {
    /**
     * cpu_extensions library is compiled from the "extension" folder containing
     * custom MKLDNNPlugin layer implementations. These layers are not supported
     * by mkldnn, but they can be useful for inferring custom topologies.
     **/
    plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
}
/** Printing plugin version **/
printPluginVersion(plugin, std::cout);
Output:
[ INFO ] Loading plugin
API version ............ 1.5
Build .................. win_20181005
Description ....... MKLDNNPlugin
New code:
VINOEngine::VINOEngine()
{
    // Loading Plugin
    std::cout << std::endl;
    std::cout << "[INFO] - Loading VINO Plugin..." << std::endl;
    this->plugin = PluginDispatcher({ "", "../../../lib/intel64", "" }).getPluginByDevice("CPU");
    this->plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
    printPluginVersion(this->plugin, std::cout);
Output:
[INFO] - Loading VINO Plugin...
000001A242280A18   // looks like a memory address ???
Second issue:
When I try to extract the ROI and mask from my new code, whenever I get a "match" I always end up with the result shown in the output below.
The mask itself, however, looks correctly extracted...
New code:
float score = box_info[2];
if (score > this->Conf_Threshold)
{
    // Rebuild the box coordinates
    float x1 = std::min(std::max(0.0f, box_info[3] * Image.cols), static_cast<float>(Image.cols));
    float y1 = std::min(std::max(0.0f, box_info[4] * Image.rows), static_cast<float>(Image.rows));
    float x2 = std::min(std::max(0.0f, box_info[5] * Image.cols), static_cast<float>(Image.cols));
    float y2 = std::min(std::max(0.0f, box_info[6] * Image.rows), static_cast<float>(Image.rows));
    int box_width  = std::min(static_cast<int>(std::max(0.0f, x2 - x1)), Image.cols);
    int box_height = std::min(static_cast<int>(std::max(0.0f, y2 - y1)), Image.rows);
Output:
Image is resized from (4288, 2848) to (800, 800)
Detected class 62 with probability 1: [4288, 0], [4288, 0]
So, since I don't have correct bbox coordinates, I cannot place the mask in the image and resize it...
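For context, the mask placement I am trying to reproduce is roughly the following (a sketch modelled on the sample demo's post-processing; raw_mask and the 33x33 mask resolution are assumptions on my side):

cv::Mat mask_small(33, 33, CV_32FC1, const_cast<float *>(raw_mask));   // per-class raw mask
cv::Mat mask_resized;
cv::resize(mask_small, mask_resized, cv::Size(box_width, box_height)); // scale to the box size
cv::Mat mask_bin = mask_resized > 0.5f;                                // binarize (CV_8U, 0/255)
cv::Rect roi(static_cast<int>(x1), static_cast<int>(y1), box_width, box_height);
Image(roi).setTo(cv::Scalar(0, 255, 0), mask_bin);                     // overlay inside the box

With the box degenerating to [4288, 0], [4288, 0], that ROI is obviously invalid.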
Does anyone have an idea of what I am doing wrong?
How do I create and correctly link an OpenVINO project with cpu_extension?
Thanks!
Answer (score: 0)
First, the version issue: just above the printPluginVersion function you will find overloaded std::ostream operators for the InferenceEngine and plugin version info. If those overloads are not visible at your call site, the Version pointer falls back to the default pointer output, which is the address you are seeing.
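If you prefer not to depend on those overloads, here is a minimal sketch that prints the version fields directly (field names per the InferenceEngine::Version struct of that release; adjust if your headers differ):

// Print the plugin version without relying on the samples' operator<< overloads.
InferenceEngine::InferenceEnginePluginPtr raw = plugin;   // implicit conversion from InferencePlugin
const InferenceEngine::Version *v = nullptr;
raw->GetVersion(v);
if (v != nullptr) {
    std::cout << "API version: " << v->apiVersion.major << "." << v->apiVersion.minor << std::endl;
    std::cout << "Build: "       << (v->buildNumber ? v->buildNumber : "") << std::endl;
    std::cout << "Description: " << (v->description ? v->description : "") << std::endl;
}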
Second: you can try to debug the model by comparing the output right after the very first convolution and after the output layer between the original framework and OpenVINO. Make sure they are equal element by element.
In OpenVINO you can add any layer to the outputs with network.addOutput("layer_name"), and then read it with: const Blob::Ptr debug_blob = infer_request.GetBlob("layer_name").
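A rough sketch of that debugging step (the layer name and the network/plugin variables are placeholders for your own objects):

// Expose an intermediate layer and read it back after inference.
network.addOutput("first_conv_layer");
InferenceEngine::ExecutableNetwork exec = plugin.LoadNetwork(network, {});
InferenceEngine::InferRequest infer_request = exec.CreateInferRequest();
// ... fill the input blob as usual, then:
infer_request.Infer();
const InferenceEngine::Blob::Ptr debug_blob = infer_request.GetBlob("first_conv_layer");
const float *data = debug_blob->buffer().as<float *>();   // compare element by element with the original framework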
In most cases like this, I find that some input preprocessing is missing (mean subtraction, normalization, etc.).
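For instance, something along these lines when filling the input blob (the mean/scale values and the image/blob names are placeholders; use whatever your model was actually converted with):

// Fill an NCHW FP32 input blob with mean/scale applied (placeholder values).
const float mean[3]  = {102.9801f, 115.9465f, 122.7717f};   // example values only
const float scale[3] = {1.0f, 1.0f, 1.0f};
cv::Mat resized;
cv::resize(bgr_image, resized, cv::Size(input_w, input_h));
float *in = input_blob->buffer().as<float *>();
for (int c = 0; c < 3; ++c)
    for (int y = 0; y < input_h; ++y)
        for (int x = 0; x < input_w; ++x)
            in[c * input_h * input_w + y * input_w + x] =
                (static_cast<float>(resized.at<cv::Vec3b>(y, x)[c]) - mean[c]) / scale[c];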
cpu_extensions is a dynamic library, but you can still change its cmake script to build it as a static library and link it with your application. After that you would need to call IExtensionPtr extension_ptr = make_so_pointer<IExtension>(argv[0]) using your application's path.
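A sketch of that statically-linked variant (assuming cpu_extension was rebuilt as a static library and linked into the executable):

// Load the extension entry points from the application binary itself.
InferenceEngine::IExtensionPtr extension_ptr =
    InferenceEngine::make_so_pointer<InferenceEngine::IExtension>(argv[0]);
plugin.AddExtension(extension_ptr);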