Confusion with OpenCV findHomography and warpPerspective

Asked: 2015-08-14 08:49:20

Tags: image opencv

First, sorry for my poor English; I will try my best to express my question. I am working on a project that involves aligning two images. All I do is detect keypoints, match them, and estimate the transformation between the two images. Here is my code:

static void target_region_warping( 
Mat IN  template_image,
Mat IN  input_image,
Mat OUT &warped_image,
int IN  method
)
{
    vector<KeyPoint> kpt1, kpt2;
    vector<Point2f> points1, points2;
    Mat desc1, desc2;
    vector<Point2f> points, points_transformed;
    vector<vector<DMatch> > matches1, matches2;
    vector<DMatch> sym_matches, fm_matches;
    Mat im_show;
    float x, y;
    Mat fundemental;

    // To avoid NaN's when best match has zero distance we will use inversed ratio. 
    const float minRatio = 1.0f / 1.5f;

    // matching scheme: SIFT + FLANN + RANSAC
    Ptr<xfeatures2d::SIFT> sift = xfeatures2d::SIFT::create( 1000, 3, 0.004, 20 );
    Ptr<flann::IndexParams> indexParams = makePtr<flann::KDTreeIndexParams>(5); // KD-tree index parameters (SIFT descriptors are float vectors)
    Ptr<flann::SearchParams> searchParams = makePtr<flann::SearchParams>(50);   // FLANN search parameters
    Ptr<DescriptorMatcher> matcher = makePtr<FlannBasedMatcher>(indexParams, searchParams);

    sift->detectAndCompute( template_image, noArray(), kpt1, desc1 );
    sift->detectAndCompute( input_image, noArray(), kpt2, desc2 );

    // step1: match and remove outliers using ratio
    // KNN match will return 2 nearest matches for each query descriptor
    matcher->knnMatch( desc1, desc2, matches1, 2 );

    // for all matches
    for ( std::vector<std::vector<cv::DMatch>>::iterator matchIterator= matches1.begin(); 
          matchIterator!= matches1.end(); ++matchIterator ) 
    {
        // if 2 NN has been identified
        if (matchIterator->size() > 1) 
        {
            // check distance ratio
            if ( (*matchIterator)[0].distance /
                (*matchIterator)[1].distance > minRatio) 
            {
                matchIterator->clear(); // remove match
            }
        } 
        else { // does not have 2 neighbours
            matchIterator->clear(); // remove match
        }
    }

#ifdef TARGET_SHOW
    drawMatches( template_image, kpt1, input_image, kpt2, matches1, im_show );
    namedWindow( "SIFT matches: image1 -> image2", WINDOW_AUTOSIZE );
    imshow( "SIFT matches: image1 -> image2", im_show );
#endif

    //step2: image2 -> image1
    matcher->knnMatch( desc2, desc1, matches2, 2 );

    for ( std::vector<std::vector<cv::DMatch>>::iterator matchIterator= matches2.begin();
          matchIterator!= matches2.end(); ++matchIterator ) 
    {
        // if 2 NN has been identified
        if (matchIterator->size() > 1) 
        {
            // check distance ratio
            if ( (*matchIterator)[0].distance/
                (*matchIterator)[1].distance > minRatio) 
            {
                matchIterator->clear(); // remove match
            }
        } 
        else { // does not have 2 neighbours
            matchIterator->clear(); // remove match
        }
    }

    //step3: symmetric matching scheme
    // for all matches image 1 -> image 2
    for ( vector< vector<DMatch> >::const_iterator matchIterator1= matches1.begin();
          matchIterator1!= matches1.end(); ++matchIterator1 ) 
    {
        // ignore deleted matches
        if (matchIterator1->size() < 2)
            continue;
        // for all matches image 2 -> image 1
        for ( std::vector<std::vector<cv::DMatch>>::const_iterator matchIterator2= matches2.begin();
              matchIterator2!= matches2.end(); ++matchIterator2 ) 
        {
            // ignore deleted matches
            if (matchIterator2->size() < 2)
                continue;
            // Match symmetry test
            if ( ( *matchIterator1)[0].queryIdx == ( *matchIterator2 )[0].trainIdx &&
                ( *matchIterator2)[0].queryIdx == ( *matchIterator1 )[0].trainIdx ) 
            {
                // add symmetrical match
                sym_matches.push_back(
                    cv::DMatch( (*matchIterator1)[0].queryIdx,
                    (*matchIterator1)[0].trainIdx,
                    (*matchIterator1)[0].distance));
                break; // next match in image 1 -> image 2
            }
        }
    }

#ifdef TARGET_SHOW
    drawMatches( template_image, kpt1, input_image, kpt2, sym_matches, im_show );
    namedWindow( "SIFT matches: symmetric matching scheme", WINDOW_AUTOSIZE );
    imshow( "SIFT matches: symmetric matching scheme", im_show );
#endif

    // step4: Identify good matches using RANSAC
    // findHomography returns the 3x3 homography (despite the variable name "fundemental")
    // first, convert keypoints into Point2f
    for ( std::vector<cv::DMatch>::const_iterator it = sym_matches.begin();
          it!= sym_matches.end(); ++it ) 
    {
        // Get the position of left keypoints
        x = kpt1[it->queryIdx].pt.x;
        y = kpt1[it->queryIdx].pt.y;
        points1.push_back( Point2f( x,y ) );

        // Get the position of right keypoints
        x = kpt2[it->trainIdx].pt.x;
        y = kpt2[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x,y));
    }

    // Compute the homography using RANSAC
    // (note: RANSAC is the flag to use here; FM_RANSAC belongs to findFundamentalMat)
    std::vector<uchar> inliers( points1.size(), 0 );

    fundemental = findHomography(
        Mat(points1),   // points from template_image
        Mat(points2),   // points from input_image
        RANSAC,         // robust estimation method
        10,             // max reprojection error for a pair to count as an inlier
        inliers,        // output inlier mask
        2000,           // max RANSAC iterations
        0.9999 );       // confidence level
    // extract the surviving (inliers) matches
    vector<uchar>::const_iterator itIn= inliers.begin();
    vector<DMatch>::const_iterator itM= sym_matches.begin();
    // for all matches
    for ( ;itIn!= inliers.end(); ++itIn, ++itM) 
    {
        if (*itIn) 
        { // it is a valid match
            fm_matches.push_back(*itM);
        }
    }

#ifdef TARGET_SHOW
    drawMatches( template_image, kpt1, input_image, kpt2, fm_matches, im_show );
    namedWindow( "SIFT matches: RANSAC matching scheme", WINDOW_AUTOSIZE );
    imshow( "SIFT matches: RANSAC matching scheme", im_show );
#endif

    // step5: warp image 1 to image 2
    cv::warpPerspective( input_image,             // input image
        warped_image,                             // output image
        fundemental,                              // homography
        input_image.size(),                       // size of output image
        cv::WARP_INVERSE_MAP | cv::INTER_CUBIC ); // interpolation flags
}

There is a problem with step5 of my code. Namely, the matrix "fundemental" is obtained by estimating the transformation from template_image to input_image, so I believe the correct call should be

// let me label this call "1"
cv::warpPerspective( template_image,          // input image
        warped_image,                         // output image
        fundemental,                          // homography
        input_image.size(),                   // size of output image
        cv::WARP_INVERSE_MAP | cv::INTER_CUBIC );

instead of

// and label this call "2"
cv::warpPerspective( input_image,             // input image
        warped_image,                         // output image
        fundemental,                          // homography
        input_image.size(),                   // size of output image
        cv::WARP_INVERSE_MAP | cv::INTER_CUBIC );

However, when I actually test the results with absdiff, like this:

// test method "1"
absdiff( warped_image, input_image, diff_image );
// test method "2"
absdiff( warped_image, template_image, diff_image );
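To make "test" concrete: I judge a result by how many (near-)zero pixels diff_image has. A small illustrative count, not part of the function above, could look like this, assuming a single-channel diff:

// illustrative only: count near-zero pixels in diff_image as an agreement score
// (assumes a single-channel diff; split channels first for colour images)
Mat agree_mask;
threshold( diff_image, agree_mask, 5, 255, THRESH_BINARY_INV ); // |diff| <= 5 -> 255
int score = countNonZero( agree_mask ); // higher score = the two images agree more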

I was surprised to find that the "wrong" call "2" produces the better result: diff_image from test "2" has many more zero elements than the one from test "1". I don't know what went wrong. Is there something I misunderstand about findHomography? I need some help, thanks!

1 Answer:

Answer 0 (score: 1):

Please try the following two versions:

cv::warpPerspective( template_image, // input image
    warped_image, // output image
    fundemental, // homography
    input_image.size(), // size of output image
    cv::INTER_CUBIC );  // HERE, INVERSE FLAG IS REMOVED

cv::warpPerspective( template_image, // input image
    warped_image, // output image
    fundemental.inv(), // homography, HERE: INVERTED HOMOGRAPHY AS INPUT
    input_image.size(), // size of output image
    cv::WARP_INVERSE_MAP | cv::INTER_CUBIC ); 

The flag cv::WARP_INVERSE_MAP signals to the OpenCV function that you are passing the already-inverted transformation. Image warping is always performed backward, from output to input, because you want to make sure that every pixel of the output image receives exactly one well-defined value.
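To make the backward mapping concrete, here is a minimal sketch, not OpenCV's actual implementation, of what happens per output pixel when WARP_INVERSE_MAP is set, i.e. when M already maps destination coordinates to source coordinates. It assumes a single-channel 8-bit image and uses nearest-neighbour sampling for brevity:

// minimal backward-warping sketch (NOT OpenCV's implementation):
// M maps destination coordinates to source coordinates, which is exactly
// the situation when WARP_INVERSE_MAP is set
static void backward_warp( const cv::Mat &src, cv::Mat &dst,
                           const cv::Mat &M, cv::Size dsize )
{
    dst = cv::Mat::zeros( dsize, src.type() );
    cv::Mat Md;
    M.convertTo( Md, CV_64F );
    for ( int y = 0; y < dsize.height; ++y )
    {
        for ( int x = 0; x < dsize.width; ++x )
        {
            // project the destination pixel back into the source image
            double w  = Md.at<double>(2,0)*x + Md.at<double>(2,1)*y + Md.at<double>(2,2);
            double sx = ( Md.at<double>(0,0)*x + Md.at<double>(0,1)*y + Md.at<double>(0,2) ) / w;
            double sy = ( Md.at<double>(1,0)*x + Md.at<double>(1,1)*y + Md.at<double>(1,2) ) / w;
            int ix = cvRound( sx ), iy = cvRound( sy );
            if ( 0 <= ix && ix < src.cols && 0 <= iy && iy < src.rows )
                dst.at<uchar>( y, x ) = src.at<uchar>( iy, ix ); // exactly one value per output pixel
        }
    }
}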

So to warp from a source image to a destination image, you either provide the homography that maps the source to the destination, in which case OpenCV inverts that transformation internally, or you provide the homography that maps the destination to the source and signal to OpenCV that it is already inverted.
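A quick sanity check that the two versions above are equivalent (up to interpolation and floating-point differences) is to run both and diff the results:

Mat warped_a, warped_b, check;
cv::warpPerspective( template_image, warped_a, fundemental,
                     input_image.size(), cv::INTER_CUBIC );
cv::warpPerspective( template_image, warped_b, fundemental.inv(),
                     input_image.size(), cv::WARP_INVERSE_MAP | cv::INTER_CUBIC );
absdiff( warped_a, warped_b, check ); // should be (near) zero everywhere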

http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#void%20warpPerspective%28InputArray%20src,%20OutputArray%20dst,%20InputArray%20M,%20Size%20dsize,%20int%20flags,%20int%20borderMode,%20const%20Scalar&%20borderValue%29


From that documentation: the mapping formula is applied directly "when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert() and then put in the formula above instead of M."
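For reference, the formula the quote refers to is (from the warpPerspective documentation):

dst(x, y) = src( (M11*x + M12*y + M13) / (M31*x + M32*y + M33),
                 (M21*x + M22*y + M23) / (M31*x + M32*y + M33) )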