Continuing with my computer vision work, I've reached the point where I compute descriptors for a patch across N cameras. The problem appears when I compute the descriptors; the OpenCV function is
descriptor.compute(image, vecKeypoints, matDescriptors);
where vecKeypoints is a vector of cv::KeyPoint and matDescriptors is a cv::Mat that, according to the OpenCV documentation, gets filled with the computed descriptors.
Since I have N cameras and compute several descriptors per camera, I end up storing K descriptors for each of the N cameras. To hold them I created a vector of descriptors (i.e., of matrices):
std::vector<cv::Mat> descriptors;
On each iteration I compute a new matDescriptors and push it into the vector descriptors. The problem I'm seeing is that the address where each matDescriptors stores its data is the same for every element in the vector. As far as I know, when I call vector.push_back(arg), a copy of arg is made and stored in the vector, so why do I get the same address? Shouldn't descriptors[0].data be different from descriptors[1].data?
Here is a general view of the code:
std::vector<Pixel> patchPos;
std::vector<Pixel> disparityPatches;

//cv::Ptr<cv::DescriptorExtractor> descriptor = cv::DescriptorExtractor::create("ORB");
cv::ORB descriptor(0, 1.2f, 8, 0);

std::vector<cv::Mat> camsDescriptors;
std::vector<cv::Mat> refsDescriptors;

uint iPatchV = 0;
uint iPatchH = 0;

// FOR EACH BLOCK OF PATCHES (there are 'blockSize' patches in one block)
for (uint iBlock = 0; iBlock < nBlocks; iBlock++)
{
    // FOR EACH PATCH IN THE BLOCK
    for (uint iPatch = iBlock*blockSize; iPatch < (iBlock*blockSize)+blockSize; iPatch++)
    {
        // GET THE POSITION OF THE upper-left CORNER (row, col) AND
        // STORE THE COORDINATES OF THE PIXELS INSIDE THE PATCH
        for (uint pRow = (iPatch*patchStep)/camRef->getWidth(), pdRow = 0; pRow < iPatchV+patchSize; pRow++, pdRow++)
        {
            for (uint pCol = (iPatch*patchStep)%camRef->getWidth(), pdCol = 0; pCol < iPatchH+patchSize; pCol++, pdCol++)
            {
                patchPos.push_back(Pixel(pCol, pRow));
            }
        }

        // KEYPOINT TO GET THE DESCRIPTOR OF THE CURRENT PATCH IN THE REFERENCE CAMERA
        std::vector<cv::KeyPoint> refPatchKeyPoint;
        // patchCenter*patchSize+patchCenter IS the index of the center pixel after 'linearizing' the patch
        refPatchKeyPoint.push_back(cv::KeyPoint(patchPos[patchCenter*patchSize+patchCenter].getX(),
                                                patchPos[patchCenter*patchSize+patchCenter].getY(), patchSize));

        // COMPUTE THE DESCRIPTOR OF THE PREVIOUS KEYPOINT
        cv::Mat d;
        descriptor.compute(Image(camRef->getHeight(), camRef->getWidth(), CV_8U, (uchar*)camRef->getData()),
                           refPatchKeyPoint, d);
        refsDescriptors.push_back(d); // This is OK, address X has data of 'd'

        // FOR EVERY OTHER CAMERA
        for (uint iCam = 0; iCam < nTotalCams-1; iCam++)
        {
            // FOR EVERY DISPARITY LEVEL
            for (uint iDispLvl = 0; iDispLvl < disparityLevels; iDispLvl++)
            {
                ...
                ...
                // COMPUTE THE DISPARITY FOR EACH OF THE PIXEL COORDINATES IN THE PATCH
                for (uint iPatchPos = 0; iPatchPos < patchPos.size(); iPatchPos++)
                {
                    disparityPatches.push_back(Pixel(patchPos[iPatchPos].getX()+dispNodeX, patchPos[iPatchPos].getY()+dispNodeY));
                }
            }

            // KEYPOINTS TO GET THE DESCRIPTORS OF THE 50 DISPARITY-SHIFTED PATCHES IN THE CURRENT CAMERA
            ...
            ...
            descriptor.compute(Image(camList[iCam]->getHeight(), camList[iCam]->getWidth(), CV_8U, (uchar*)camList[iCam]->getData()),
                               camPatchKeyPoints, d);
            // First time this executes it's OK, the address is different from the previous 'd'
            // Second time, the address is the same as the previously pushed 'd'
            camsDescriptors.push_back(d);

            disparityPatches.clear();
            camPatchKeyPoints.clear();
        }
    }
}
Answer 0 (score: 2)
Mat is essentially a smart pointer to the pixel data, so Mat a = b leaves a and b sharing the same pixels. The same thing happens with push_back().
If you need a deep copy, use Mat::clone().
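Applied to the loop in the question, that is a one-line change: clone the freshly computed descriptor matrix before storing it, so every element of the vector owns its own buffer (a minimal sketch reusing the question's names):

    descriptor.compute(Image(camList[iCam]->getHeight(), camList[iCam]->getWidth(), CV_8U, (uchar*)camList[iCam]->getData()),
                       camPatchKeyPoints, d);
    // clone() deep-copies the descriptor data, so the stored Mat no longer shares 'd's buffer
    camsDescriptors.push_back(d.clone());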
Answer 1 (score: 0)
In each loop iteration, make sure you call cv::Mat::release() on the Mat before it is filled and appended to the vector again; releasing it forces the next compute() to allocate fresh storage instead of writing into the buffer the vector elements already share.
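A sketch of that idea, again reusing the question's names: releasing d at the top of the iteration detaches it from the old buffer (which the previously pushed headers still reference), so compute() has to allocate new memory:

    for (...) {
        d.release();                  // drop the reference to the old buffer
        descriptor.compute(..., d);   // create() inside compute() now allocates fresh memory
        camsDescriptors.push_back(d); // the stored header points at the new buffer
    }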
Answer 2 (score: 0)
cv::Mat shares its data implicitly: whenever you copy one with the assignment operator or the copy constructor (which is what push_back uses in your code), the data is not copied but shared with the new object, and any change made through one is visible through the other. That is why, as you observed, the pointers are identical.
In this case you simply need to create a new cv::Mat on every iteration:
for (...) {
    cv::Mat d;
    descriptor.compute(..., d);
    camsDescriptors.push_back(d);
}
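To see the sharing directly, here is a small standalone check (my own illustration, not part of the original answer) that prints the data pointers of a shallow copy and a clone:

    #include <cstdio>
    #include <opencv2/core/core.hpp>

    int main()
    {
        cv::Mat a(4, 4, CV_8UC1, cv::Scalar(0));
        cv::Mat shallow = a;        // copy constructor: header copied, pixels shared
        cv::Mat deep = a.clone();   // clone(): pixels copied into a new buffer

        std::printf("a:       %p\n", (void*)a.data);
        std::printf("shallow: %p\n", (void*)shallow.data); // same address as 'a'
        std::printf("deep:    %p\n", (void*)deep.data);    // a different address
        return 0;
    }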
The answer I was looking for: if you want to preallocate the matrices, this is how to do it without copies, temporaries, or accidental sharing:
std::vector<cv::Mat> v;
v.reserve(N);
for (size_t i = 0; i < N; i++) {
    // Uninitialized Mat of the specified size, header constructed in place
    v.emplace_back(height, width, CV_8UC1);
}
Contrast this with two approaches that work but are subtly wrong; with a non-sharing data structure they would look like perfectly ordinary C++:
std::vector<cv::Mat> v;
v.resize(N, cv::Mat(height, width, CV_8UC1));
or
std::vector<cv::Mat> v;
v.reserve(N);
cv::Mat temp(height, width, CV_8UC1);
for (size_t i = 0; i < N; i++) {
    // Uninitialized Mat of the specified size, header copied from temp
    v.push_back(temp);
}
In both of those cases the data ends up shared behind the scenes among all elements of the vector, which is not at all what we wanted when creating it!
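A quick way to convince yourself (my own check, not from the original answer): build the vector both ways and compare the data pointers of its elements:

    #include <cstdio>
    #include <vector>
    #include <opencv2/core/core.hpp>

    int main()
    {
        const size_t N = 3;
        const int height = 4, width = 4;

        // Subtly wrong: all N elements are copy-constructed from one temporary Mat
        std::vector<cv::Mat> shared;
        shared.resize(N, cv::Mat(height, width, CV_8UC1));

        // Correct: each element constructs, and therefore allocates, its own buffer
        std::vector<cv::Mat> owned;
        owned.reserve(N);
        for (size_t i = 0; i < N; i++) {
            owned.emplace_back(height, width, CV_8UC1);
        }

        std::printf("shared: %p %p\n", (void*)shared[0].data, (void*)shared[1].data); // identical
        std::printf("owned:  %p %p\n", (void*)owned[0].data, (void*)owned[1].data);   // different
        return 0;
    }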