Error computing OpenCV's perspective transform in MATLAB

Date: 2013-12-01 15:36:35

Tags: matlab opencv image-processing computer-vision perspectivecamera

I am trying to recode feature matching and homography from the OpenCV vision toolbox into MATLAB using mexopencv.

My code using the OpenCV toolbox in MATLAB:

function hello

    close all;clear all;

    disp('Feature matching demo, press key when done');

    boxImage = imread('D:/pic/500_1.jpg');

    boxImage = rgb2gray(boxImage);

    [boxPoints,boxFeatures] = cv.ORB(boxImage);

    sceneImage = imread('D:/pic/100_1.jpg');

    sceneImage = rgb2gray(sceneImage);

    [scenePoints,sceneFeatures] = cv.ORB(sceneImage);

    if (isempty(scenePoints)|| isempty(boxPoints)) 
        return;
    end;


    matcher = cv.DescriptorMatcher('BruteForce');
    matches = matcher.match(boxFeatures,sceneFeatures);


    %Box contains pixels coordinates where there are matches
    box = [boxPoints([matches(2:end).queryIdx]).pt];

    %Scene contains pixels coordinates where there are matches
    scene = [scenePoints([matches(2:end).trainIdx]).pt];

    %Please refer to http://stackoverflow.com/questions/4682927/matlab-using-mat2cell

    %Box arrays contains coordinates the form [ (x1,y1), (x2,y2) ...]
    %after applying mat2cell function
    [nRows, nCols] = size(box);
    nSubCols = 2;
    box = mat2cell(box,nRows,nSubCols.*ones(1,nCols/nSubCols));

    %Scene arrays contains coordinates the form [ (x1,y1), (x2,y2) ...]
    %after applying mat2cell function

    [nRows, nCols] = size(scene);
    nSubCols = 2;
    scene = mat2cell(scene,nRows,nSubCols.*ones(1,nCols/nSubCols));

    %Finding homography between box and scene
    H = cv.findHomography(box,scene);

    boxCorners = [1, 1;...                           % top-left
        size(boxImage, 2), 1;...                 % top-right
        size(boxImage, 2), size(boxImage, 1);... % bottom-right
        1, size(boxImage, 1)];

  %Fine until this point , problem starts with perspectiveTransform   
  sceneCorners= cv.perspectiveTransform(boxCorners,H); 

end

Error:

    Error using cv.perspectiveTransform
Unexpected Standard exception from MEX file.
What()
is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\core\src\matmul.cpp:1926:
error: (-215) scn + 1 == m.cols && (depth == CV_32F || depth == CV_64F)

..

Error in hello (line 58)
  sceneCorners= cv.perspectiveTransform(boxCorners,H);

The problem starts at perspectiveTransform(boxCorners, H); everything up to finding the homography works fine. Also note that when computing the matched coordinates in the box and the scene, I indexed from 2:end, as in box = [boxPoints([matches(2:end).queryIdx]).pt], because accessing the queryIdx of the first match produced an index that could not be accessed (it is zero, and MATLAB indexing starts at one). However, I don't think that is the problem. Anyway, I look forward to answers for my issue. Thanks.
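For context, the (-215) assertion in matmul.cpp is checking that the input points are a floating-point array whose per-point channel count matches the transform matrix; in other words, perspectiveTransform wants 2-channel single/double points rather than a plain Nx2 matrix. Below is a minimal sketch of repacking the corners into a 1xNx2 double layout (a guess at the fix using the variable names from the code above, not a verified solution):

```matlab
% corners as an Nx2 double matrix of (x,y) pairs
boxCorners = double([1, 1; size(boxImage,2), 1; ...
                     size(boxImage,2), size(boxImage,1); 1, size(boxImage,1)]);
% repack into a 1xNx2 array: the x and y columns become the two channels
corners = permute(boxCorners, [3 1 2]);
sceneCorners = cv.perspectiveTransform(corners, H);
```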

PS: This is an edited version of my original post. The solution I received below was not sufficient, and the error kept recurring.

Second update:

Following @Amro's advice, I have updated my code as shown below. The inliers give a good response, but the coordinates computed by the perspective transform are somehow distorted.

function hello
    close all; clear all; clc;

    disp('Feature matching with ORB');

    %Feature detector and extractor for object
    imgObj = imread('D:/pic/box.png');
    %boxImage = rgb2gray(boxImage);
    [keyObj,featObj] = cv.ORB(imgObj);

    %Feature detector and extractor for scene
    imgScene = imread('D:/pic/box_in_scene.png');
    %sceneImage = rgb2gray(sceneImage);
    [keyScene,featScene] = cv.ORB(imgScene);

    if (isempty(keyScene)|| isempty(keyObj)) 
        return;
    end;

    matcher = cv.DescriptorMatcher('BruteForce-HammingLUT');
    m = matcher.match(featObj,featScene);

    %im_matches = cv.drawMatches(boxImage, boxPoints, sceneImage, scenePoints,m);

    % extract keypoints from the filtered matches
    % (C zero-based vs. MATLAB one-based indexing)
    ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
    ptsObj = num2cell(ptsObj, 2);
    ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
    ptsScene = num2cell(ptsScene, 2);

    % compute homography
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

    % remove outliers reported by RANSAC
    inliers = logical(inliers);
    m = m(inliers);

    % show the final matches
    imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
    'NotDrawSinglePoints',true);
    imshow(imgMatches);

    % apply the homography to the corner points of the box
    [h,w] = size(imgObj);
    corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
    p = cv.perspectiveTransform(corners, H)
    p = permute(p, [2 3 1])
    p = bsxfun(@plus, p, [size(imgObj,2) 0]);

    % draw lines between the transformed corners (the mapped object)
    opts = {'Color',[0 255 0], 'Thickness',4};
    imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
    imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
    imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
    imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
    imshow(imgMatches)
    title('Matches & Object detection')

end

The output is fine, but perspectiveTransform is not giving the correct coordinates to solve the problem. My output so far:

Output

Third update:

I have run all the code and got the homography working. However, one corner case is really bothering me. If I do imgObj = imread('D:/pic/box.png') and imgScene = imread('D:/pic/box_in_scene.png'), I get the homography rectangle; but when I do imgScene = imread('D:/pic/box.png'), i.e. the object and the scene are the same, I get this error -

Error using cv.findHomography
Unexpected Standard exception from MEX file.
What()
is:C:\slave\builds\WinInstallerMegaPack\src\opencv\modules\calib3d\src\fundam.cpp:1074:
error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() ==
points2.type()

..

Error in hello (line 37)
    [H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

Now, I have run into this error in the past; it happens when the number of ptsObj and ptsScene is small, for example when the scene is just a white/black screen and the keypoints of that scene are zero. In this particular case there is a sufficient number of ptsObj and ptsScene. So where is the problem? I tested this code with SURF and the same error keeps resurfacing.
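Since the assertion complains about point counts and types, a defensive guard before the call can at least distinguish a data problem from a packing problem. A sketch using the variable names from the update above (the threshold of 4 is the minimum number of correspondences a homography estimate needs):

```matlab
% a homography needs at least 4 point pairs, and both sides must agree in count
if numel(ptsObj) < 4 || numel(ptsObj) ~= numel(ptsScene)
    error('hello:findHomography', ...
          'Not enough matched point pairs: %d vs %d.', ...
          numel(ptsObj), numel(ptsScene));
end
[H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');
```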

3 Answers:

Answer 0 (score: 4):

A few remarks:

  • The matcher returns zero-based indices (as do various other functions, a consequence of OpenCV being implemented in C++). So if you want to get the corresponding keypoints, you have to adjust by one (MATLAB arrays are one-based). mexopencv intentionally does not adjust for this automatically.

  • The cv.findHomography MEX function accepts points either as a numeric array of size 1xNx2 (e.g.: cat(3, [x1,x2,...], [y1,y2,...])) or as an N-sized cell array of two-element vectors (i.e. {[x1,y1], [x2,y2], ...}). In this case, I am not sure your code packs the points correctly; either way, it can be made simpler.
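To illustrate the two accepted layouts with made-up coordinates (a sketch, not taken from the original post):

```matlab
% layout 1: 1xNx2 numeric array, x-values and y-values stacked along dim 3
ptsNum = cat(3, [10 20 30 40], [15 25 35 45]);    % size(ptsNum) is [1 4 2]
% layout 2: cell array of two-element [x,y] vectors
ptsCell = {[10 15], [20 25], [30 35], [40 45]};
% an Nx2 matrix converts to the cell layout with num2cell along dim 2
ptsCell2 = num2cell([10 15; 20 25; 30 35; 40 45], 2);
```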

Here is the complete demo translated from C++ to MATLAB:

% input images
imgObj = imread('box.png');
imgScene = imread('box_in_scene.png');

% detect keypoints and calculate descriptors using SURF
detector = cv.FeatureDetector('SURF');
keyObj = detector.detect(imgObj);
keyScene = detector.detect(imgScene);

extractor = cv.DescriptorExtractor('SURF');
featObj = extractor.compute(imgObj, keyObj);
featScene = extractor.compute(imgScene, keyScene);

% match descriptors using FLANN
matcher = cv.DescriptorMatcher('FlannBased');
m = matcher.match(featObj, featScene);

% keep only "good" matches (whose distance is less than k*min_dist )
dist = [m.distance];
m = m(dist < 3*min(dist));

% extract keypoints from the filtered matches
% (C zero-based vs. MATLAB one-based indexing)
ptsObj = cat(1, keyObj([m.queryIdx]+1).pt);
ptsObj = num2cell(ptsObj, 2);
ptsScene = cat(1, keyScene([m.trainIdx]+1).pt);
ptsScene = num2cell(ptsScene, 2);

% compute homography
[H,inliers] = cv.findHomography(ptsObj, ptsScene, 'Method','Ransac');

% remove outliers reported by RANSAC
inliers = logical(inliers);
m = m(inliers);

% show the final matches
imgMatches = cv.drawMatches(imgObj, keyObj, imgScene, keyScene, m, ...
    'NotDrawSinglePoints',true);
imshow(imgMatches)

% apply the homography to the corner points of the box
[h,w] = size(imgObj);
corners = permute([0 0; w 0; w h; 0 h], [3 1 2]);
p = cv.perspectiveTransform(corners, H);
p = permute(p, [2 3 1]);
p = bsxfun(@plus, p, [size(imgObj,2) 0]);

% draw lines between the transformed corners (the mapped object)
opts = {'Color',[0 255 0], 'Thickness',4};
imgMatches = cv.line(imgMatches, p(1,:), p(2,:), opts{:});
imgMatches = cv.line(imgMatches, p(2,:), p(3,:), opts{:});
imgMatches = cv.line(imgMatches, p(3,:), p(4,:), opts{:});
imgMatches = cv.line(imgMatches, p(4,:), p(1,:), opts{:});
imshow(imgMatches)
title('Matches & Object detection')

matches_homography

Now you can try using other algorithms for feature detection/extraction (ORB in your case). Keep in mind that you may need to adjust some of the parameters above to get good results (e.g. the multiplier that controls how many keypoint matches to keep).


EDIT:

Like I said, there is no one-size-fits-all solution in computer vision. You need to experiment by adjusting the parameters of the various algorithms to get good results on your data. For example, the ORB constructor accepts a number of options. Also, as the documentation suggests, a brute-force matcher with Hamming distance is the recommended matcher for ORB descriptors.

Finally, note that I specified the RANSAC robust algorithm as the method for computing the homography matrix; looking at the screenshot you provided, you can see the outlier match mistakenly pointing at the black computer-vision book in the scene. The advantage of the RANSAC method is that it can estimate accurately even when the data contains a large number of outliers. The default method used by findHomography is to use all available points.

Also note that in your case some of the control points used to estimate the homography are nearly collinear, which can badly affect the computation (similar to how numerically inverting a near-singular matrix is a bad idea).

As mentioned above, below I highlight the relevant part of the code that gave me good results using ORB descriptors (the rest is unchanged from what I posted earlier):

% detect keypoints and calculate descriptors using ORB
[keyObj,featObj] = cv.ORB(imgObj);
[keyScene,featScene] = cv.ORB(imgScene);

% match descriptors using brute force with Hamming distances
matcher = cv.DescriptorMatcher('BruteForce-Hamming');
m = matcher.match(featObj, featScene);

% keep only "good" matches (whose distance is less than k*min_dist )
dist = [m.distance];
m = m(dist < 3*min(dist));

ORB_matching

I noticed that you omitted the last part, where I filter the matches by dropping bad ones. You can always look at the distribution of the "distance" values of the found matches and decide on an appropriate threshold. That's what I had in mind initially:

hist([m.distance])
title('Distribution of match distances')

match_distances_distribution

You could also apply similar processing to the original keypoints based on their response values, and subsample the points accordingly:

subplot(121), hist([keyObj.response]); title('box')
subplot(122), hist([keyScene.response]); title('scene')

HTH

Answer 1 (score: 2):

The functions in the Image Processing Toolbox and the Computer Vision System Toolbox use a convention for transforming points that differs from what you see in most textbooks. In most textbooks, points are represented as column vectors, so your transform looks like this: H * x, where H is the transformation matrix and x is a matrix whose columns are the points.

In MATLAB, on the other hand, points are typically represented as row vectors. So you have to reverse the order of the multiplication and transpose H: x' * H'.
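A tiny worked example of the two conventions, using a hypothetical pure-translation homography:

```matlab
% column-vector convention (textbooks): x2 = H * x1
H = [1 0 5; 0 1 10; 0 0 1];   % translate by (5, 10)
xCol = H * [2; 3; 1];         % gives [7; 13; 1]
% row-vector convention (MATLAB toolboxes): x2^T = x1^T * H^T
xRow = [2 3 1] * H';          % gives [7 13 1]
```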

Finally, if you have MATLAB's Computer Vision System Toolbox, you can solve your problem with a lot less code. Take a look at this example.

Answer 2 (score: 1):

Try using the transpose of H.

We compute the homography matrix as x' = H * x, but in MATLAB it looks like this: x'^T = x^T * H^T (x'^T denotes the transpose of x'). So transpose your homography and try again.