Problem with OpenCV warpPerspective

Asked: 2014-02-24 16:18:39

Tags: android c++ opencv image-processing feature-detection

I am working on an Android background-subtraction project with a moving camera. I am trying to use feature matching, findHomography and warpPerspective to find the overlapping pixels between two frames. However, the output I get is slightly incorrect. I am quite new to image processing, so I am not familiar with all the terminology. I have two main problems:

1) The result of warpPerspective is overly distorted - for example the image is skewed, objects in the image are flipped, squashed, and so on. How do I fix this?

2) I sometimes get an 'OpenCV Error: Assertion failed' error that crashes my app. The error points to warpPerspective. Notes: the dimensions of image1 (the previous frame) and image2 (the current frame) are the same. I convert the images to grayscale before detecting features (currently from RGB). I sometimes got a similar 'OpenCV assertion failed' error with findHomography, but I know it needs at least 4 points - so adding an if statement solved that - but I don't know how to resolve the error with warpPerspective.

The error I get:

02-24 15:30:49.554: E/cv::error()(4589): OpenCV Error: Assertion failed (type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U)) 
    in void cv::batchDistance(cv::InputArray, cv::InputArray, cv::OutputArray, int, cv::OutputArray, int, int, cv::InputArray, int, bool), 
    file /home/reports/ci/slave_desktop/50-SDK/opencv/modules/core/src/stat.cpp, line 2473

My code:

void stitchFrames(){

    //convert frames to grayscale
    image1 = prevFrame.clone();
    image2 = currFrame.clone();

    if(colourSpace==1){ //convert from RGB to gray
        cv::cvtColor(image1, image1Gray,CV_RGB2GRAY);
        cv::cvtColor(image2, image2Gray,CV_RGB2GRAY);
    }
    else if(colourSpace==2){ //convert from HSV to gray
        cv::cvtColor(image1, image1Gray, CV_HSV2RGB);
        cv::cvtColor(image1Gray, image1Gray, CV_RGB2GRAY);
        cv::cvtColor(image2, image2Gray, CV_HSV2RGB);
        cv::cvtColor(image2Gray, image2Gray, CV_RGB2GRAY);
    }

    else if(colourSpace==3){ //no need for conversion
        image1Gray = image1;
        image2Gray = image2;
    }

    //----FEATURE DETECTION----

    //key points
    std::vector<KeyPoint> keypoints1, keypoints2;

    int minHessian;

    cv::FastFeatureDetector detector;

    detector.detect(image1Gray,keypoints1); //prevFrame
    detector.detect(image2Gray,keypoints2); //currFrame

    //debug: inspect one keypoint's coordinates (these three lines appear unused)
    KeyPoint kp = keypoints2[4];
    Point2f p = kp.pt;
    float i = p.y;

    //---FEATURE EXTRACTION----

    //extracted descriptors
    cv::Mat descriptors1,descriptors2;

    OrbDescriptorExtractor extractor;
    extractor.compute(image1,keypoints1,descriptors1); //prevFrame
    extractor.compute(image2,keypoints2,descriptors2); //currFrame

    //----FEATURE MATCHING----

    //BruteForceMatcher

    BFMatcher matcher;

    std::vector< cv::DMatch > matches; //result of matching descriptors
    std::vector< cv::DMatch > goodMatches; //result of sifting matches to get only 'good' matches

    matcher.match(descriptors1,descriptors2,matches);

    //----HOMOGRAPY - WARP-PERSPECTIVE - PERSPECTIVE-TRANSFORM----

    double maxDist = 0.0; //keep track of max distance from the matches
    double minDist = 80.0; //keep track of min distance from the matches

    //calculate max & min distances between keypoints
    for(int i=0; i<descriptors1.rows;i++){
        DMatch match = matches[i];

        float dist = match.distance;
        if (dist<minDist) minDist = dist;
        if(dist>maxDist) maxDist=dist;
    }

    //get only the good matches
    for( int i = 0; i < descriptors1.rows; i++ ){
        DMatch match = matches[i];
        if(match.distance< 500){
            goodMatches.push_back(match);
        }
    }

    std::vector< Point2f > obj;
    std::vector< Point2f > scene;

    //get the keypoints from the good matches
    for( int i = 0; i < goodMatches.size(); i++ ){

        //--keypoints from image1
        DMatch match1 = goodMatches[i];
        int qI1 = match1.trainIdx;
        KeyPoint kp1 = keypoints2[qI1];
        Point2f point1 = kp1.pt;
        obj.push_back(point1);

        //--keypoints from image2
        DMatch match2 = goodMatches[i];
        int qI2 = match2.queryIdx;
        KeyPoint kp2 = keypoints1[qI2];
        Point2f point2 = kp2.pt;
        scene.push_back(point2);

    }

    //calculate the homography matrix
    if(goodMatches.size() >=4){
        Mat H = findHomography(obj,scene, CV_RANSAC);

        warpPerspective(image2,warpResult,H,Size(image1.cols,image1.rows));
    }
}

2 answers:

Answer 0 (score: 0)

Concerning your first problem, I think the distortion you mention is due to the following:

  • You estimated a homography H that maps coordinates in image1 to coordinates in image2. When you call Mat H = findHomography(obj, scene, CV_RANSAC);, obj holds point coordinates in image1 and scene holds point coordinates in image2.

  • You then use H in warpPerspective as if it mapped coordinates in image2 to coordinates in image1, since you expect it to transform image2 into warpResult, which I guess should be stitched onto image1.

Hence, you should estimate the homography H the other way around: Mat H = findHomography(scene, obj, CV_RANSAC);
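
A minimal sketch of that fix applied to the question's stitchFrames() (only the argument order passed to findHomography changes; everything else is unchanged):

    //H now maps coordinates in image2 to coordinates in image1,
    //which matches how warpPerspective uses it below
    if(goodMatches.size() >= 4){
        Mat H = findHomography(scene, obj, CV_RANSAC);
        warpPerspective(image2, warpResult, H, Size(image1.cols, image1.rows));
    }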

Concerning your second problem, I think it is raised by this instruction:

matcher.match(descriptors1, descriptors2, matches);

The error says that the expression

(type == src2.type() && src1.cols == src2.cols && (type == CV_32F || type == CV_8U))

was found to be false, whereas it must be true for the function to work. A similar problem was solved here: before calling the match function, you need to check manually that this condition holds for the two descriptor matrices.
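
A possible guard, assuming src1 and src2 in the assertion correspond to descriptors1 and descriptors2 (an assumption on my part; the check simply mirrors the failed expression):

    //hypothetical guard: skip matching for this frame pair when either
    //descriptor matrix is empty or their type/width disagree, which is
    //what the assertion inside batchDistance verifies
    if(descriptors1.empty() || descriptors2.empty()
            || descriptors1.type() != descriptors2.type()
            || descriptors1.cols != descriptors2.cols){
        return; //nothing sensible to match on this frame
    }
    matcher.match(descriptors1, descriptors2, matches);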

Answer 1 (score: 0)

Concerning (1), my guess is that the homography you estimated is based on bad matches.

As a start, I would use the ORB detector instead of FAST, and then tune findHomography's ransacReprojThreshold parameter. Its default value is 3; from the documentation:

ransacReprojThreshold:

Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC method only). That is, if

|| dstPoints_i - convertPointsHomogeneous(H * srcPoints_i) || > ransacReprojThreshold

then the point i is considered an outlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10.

In other words, with the default of 3 pixels: if, after applying the homography to a srcPoint, its distance to the corresponding dstPoint is more than 3 pixels, that pair is considered an outlier (i.e. a bad match).
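
A hedged sketch of both suggestions, using the same OpenCV 2.4-style API as the question's code (the threshold value 5 is only an illustrative pick from the 1-10 range, not a recommendation):

    //detect with ORB instead of FAST
    cv::OrbFeatureDetector detector;
    detector.detect(image1Gray, keypoints1);
    detector.detect(image2Gray, keypoints2);

    //...extraction and matching as before, then pass an explicit
    //ransacReprojThreshold (default 3) as the fourth argument
    double ransacReprojThreshold = 5.0;
    Mat H = findHomography(scene, obj, CV_RANSAC, ransacReprojThreshold);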

This is only a start. It also helps to use a better filter for selecting good matches and for validating the resulting homography; you will find several answers about that:

OpenCV Orb not finding matches..

How can you tell if a homography matrix is acceptable or not?
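
For example, one commonly suggested filter, sketched here (adapted from the OpenCV feature-matching tutorial rather than taken from either answer; note also that for ORB's binary descriptors the brute-force matcher should use the Hamming norm, since BFMatcher defaults to L2):

    //match with the Hamming norm, which suits ORB's binary descriptors
    cv::BFMatcher matcher(cv::NORM_HAMMING);
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);

    //keep only matches whose distance is close to the best distance seen;
    //the 0.02 floor is the tutorial's value and mainly matters for float
    //descriptors, where the best distance can be near zero
    double minDist = 100.0;
    for(size_t i = 0; i < matches.size(); i++){
        if(matches[i].distance < minDist) minDist = matches[i].distance;
    }
    std::vector<cv::DMatch> goodMatches;
    for(size_t i = 0; i < matches.size(); i++){
        if(matches[i].distance <= std::max(2.0 * minDist, 0.02)){
            goodMatches.push_back(matches[i]);
        }
    }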