Perspective transform does not compute the correct lines in the scene in MATLAB

Date: 2013-12-02 13:15:40

Tags: matlab opencv image-processing computer-vision

So, I am trying to write the equivalent of the homography example given for OpenCV. The code is long but fairly concise: first it computes keypoints and descriptors for the object and for the scene (captured from a webcam), then compares them using a BruteForce matcher. It then selects the best matches and uses them to compute the homography between object and scene, followed by a perspective transform. My problem is that the perspective transform does not give a good result: the coordinates it produces seem to cluster around (0,0). I ran similar code in Eclipse using plain OpenCV, and there I could see the first coordinate change as I moved the camera, which does not happen here. Also note that the computed homography values are slightly different. As far as I can tell the logic of the code is fine, yet the rectangular region is not drawn correctly in the scene: I can see various lines being drawn over the scene image, but they do not fit the object as they should. It probably needs a different set of eyes. Thanks.

function hello

    disp('Feature matching demo. Press any key when done.');

    % Set up camera
    camera = cv.VideoCapture;
    pause(3); % Necessary in some environment. See help cv.VideoCapture

    % Set up display window
    window = figure('KeyPressFcn',@(obj,evt)setappdata(obj,'flag',true));
    setappdata(window,'flag',false);

    object = imread('D:/match.jpg');

    %Conversion from color to gray
    object = cv.cvtColor(object,'RGB2GRAY');

    %Declaring detector and extractor
    detector = cv.FeatureDetector('SURF');
    extractor = cv.DescriptorExtractor('SURF');

    %Calculating object keypoints
    objKeypoints = detector.detect(object);

    %Calculating object descriptors
    objDescriptors = extractor.compute(object,objKeypoints);

    % Start main loop
    while true
        % Grab and preprocess an image
        im = camera.read;
        %im = cv.resize(im,1);
        scene = cv.cvtColor(im,'RGB2GRAY');

        sceneKeypoints = detector.detect(scene);

        %Checking for empty keypoints
        if isempty(sceneKeypoints) 
            continue
        end;

        sceneDescriptors = extractor.compute(scene,sceneKeypoints);

        matcher = cv.DescriptorMatcher('BruteForce');
        matches = matcher.match(objDescriptors,sceneDescriptors);

        objDescriptRow = size(objDescriptors,1);
        dist_arr = zeros(1,objDescriptRow);


        for i=1:objDescriptRow
            dist_arr(i) = matches(i).distance;
        end;


        min_dist = min(dist_arr);

        N = 10000;    
        good_matches = repmat(struct('distance',0,'imgIdx',0,'queryIdx',0,'trainIdx',0), N, 1 );

        goodmatchesSize = 0;

        for i=1:objDescriptRow
            if matches(i).distance < 3 * min_dist
                good_matches(i).distance = matches(i).distance;
                good_matches(i).imgIdx = matches(i).imgIdx;
                good_matches(i).queryIdx = matches(i).queryIdx;
                good_matches(i).trainIdx = matches(i).trainIdx;

                %Recording the number of good matches
                goodmatchesSize = goodmatchesSize +1;
            end
        end

        im_matches = cv.drawMatches(object, objKeypoints, scene, sceneKeypoints,good_matches);

        objPoints = [];
        scnPoints = [];


        %Finding the good matches
        for i=1:goodmatchesSize

            qryIdx = good_matches(i).queryIdx;
            trnIdx = good_matches(i).trainIdx;
            if qryIdx == 0 
                continue 
            end;
            if trnIdx == 0
                continue
            end;

            first_point = objKeypoints(qryIdx).pt;
            second_point = sceneKeypoints(trnIdx).pt;

            objPoints(i,:)= (first_point);
            scnPoints(i,:) = (second_point);

        end

        %Error checking     
        if length(scnPoints) <=4
            continue
        end;
        if length(scnPoints)~= length(objPoints)
            continue
        end;


        % Finding homography of arrays of two sets of points 
        H = cv.findHomography(objPoints,scnPoints);


        objectCorners = [];
        sceneCorners =[];


        objectCorners(1,1) = 0.1;
        objectCorners(1,2) = 0.1;

        objectCorners(2,1) = size(object,2);
        objectCorners(2,2) = 0.1;

        objectCorners(3,1) = size(object,2);
        objectCorners(3,2) = size(object,1);

        objectCorners(4,1) = 0.1;
        objectCorners(4,2) = size(object,1);

        %Transposing the object corners for perspective transform to work
        newObj = shiftdim(objectCorners,-1);

        %Calculating the perspective transform
        foo = cv.perspectiveTransform(newObj,H);
        sceneCorners = shiftdim(foo,1);

        offset = [];
        offset(1,1) = size(object,2);
        offset(1,2)= 0;


        outimg = cv.line(im_matches,sceneCorners(1,:)+offset,sceneCorners(2,:)+offset);
        outimg = cv.line(outimg,sceneCorners(2,:)+offset,sceneCorners(3,:)+offset);
        outimg = cv.line(outimg,sceneCorners(3,:)+offset,sceneCorners(4,:)+offset);
        outimg = cv.line(outimg,sceneCorners(4,:)+offset,sceneCorners(1,:)+offset);
        imshow(outimg);


        % Terminate if any user input
        flag = getappdata(window,'flag');
        if isempty(flag)||flag, break; end
        pause(0.000000001);
    end

% Close

    close(window);

end

2 Answers:

Answer 0 (score: 0)

First, the obvious questions:

How do you know the matches are good? Did you draw them on the images to verify them? Are you sure you are ordering the points correctly when you pass them to the fitting routine?

You note that the homography coefficients you get are "slightly" different, but their absolute change does not mean much, because a homography is only defined up to scale. What matters is the reprojection error in image coordinates.
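One way to check this: instead of comparing the coefficients of H directly, apply H to the object points and measure the distance to the matched scene points. A minimal sketch, assuming the `objPoints`, `scnPoints` and `H` variables from the question's code (Nx2 point arrays and a 3x3 homography):

```matlab
% Apply the homography to the object points (homogeneous coordinates)
pts  = [objPoints, ones(size(objPoints,1),1)]';  % 3xN
proj = H * pts;
proj = proj(1:2,:) ./ repmat(proj(3,:), 2, 1);   % divide by w

% Per-point reprojection error in pixels, and its mean
err = sqrt(sum((proj' - scnPoints).^2, 2));
fprintf('mean reprojection error: %.2f px\n', mean(err));
```

If the mean error is large (tens of pixels), the problem lies in the matches or in the point arrays fed to cv.findHomography, not in the drawing code. Also note that scaling H by any nonzero constant leaves `proj` unchanged, which is why comparing raw coefficients is not meaningful.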

Answer 1 (score: 0)

Do you need a full homography? For this application an affine, or even a similarity transform (dx, dy, scale and rotation), may be enough. A more constrained transform will also behave better in the presence of noise.
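As a sketch of this suggestion: mexopencv exposes cv.estimateRigidTransform, which fits a 2x3 affine (or, with full-affine disabled, a similarity transform) to matched point pairs and can replace cv.findHomography here. The `objPoints`, `scnPoints` and `object` variables are assumed from the question's code, and the exact option name should be checked against your mexopencv version:

```matlab
% Fit a similarity transform (translation, rotation, uniform scale):
% 4 degrees of freedom instead of 8 for a full homography.
M = cv.estimateRigidTransform(objPoints, scnPoints, 'FullAffine', false);

% Map the object corners with the resulting 2x3 matrix
w = size(object,2); h = size(object,1);
corners = [0 0; w 0; w h; 0 h];
mappedCorners = corners * M(:,1:2)' + repmat(M(:,3)', 4, 1);
```

The mapped corners can then be drawn with cv.line exactly as in the question; no homogeneous division is needed since the transform is affine.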