How to find the (0,0) coordinate after applying warpPerspective (OpenCV)?

Posted: 2015-02-10 14:36:35

Tags: c++ opencv coordinates homography mosaic

I am applying warpPerspective with OpenCV to assemble a mosaic from several images, but I have run into a big problem...

When I apply cvWarpPerspective, the resulting image is not fully shown in the window. Only part of the image appears, and I need to know how to find the (0,0) coordinate of my image after applying warpPerspective. As you can see, in the first image part of the picture is cut off when compared with the second image shown here.

So my question is: how do I find the starting coordinate (0,0) after applying warpPerspective? I need help with this. How can I solve it using OpenCV's tools?

Here is my code:

#include <stdio.h>
#include <iostream>

#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

void readme();

/** @function main */
int main( int argc, char** argv )
{
    // Load the images
    Mat image1 = imread( "f.jpg" );
    Mat image2 = imread( "e.jpg" );
    Mat gray_image1;
    Mat gray_image2;

    // Convert to grayscale
    cvtColor( image1, gray_image1, CV_RGB2GRAY );
    cvtColor( image2, gray_image2, CV_RGB2GRAY );

    imshow( "first image", image2 );
    imshow( "second image", image1 );

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 100;

    SurfFeatureDetector detector( minHessian );

    std::vector< KeyPoint > keypoints_object, keypoints_scene;

    detector.detect( gray_image1, keypoints_object );
    detector.detect( gray_image2, keypoints_scene );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute( gray_image1, keypoints_object, descriptors_object );
    extractor.compute( gray_image2, keypoints_scene, descriptors_scene );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_object, descriptors_scene, matches );

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_object.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Use only "good" matches (i.e. whose distance is less than 3*min_dist )
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_object.rows; i++ )
    {
        if( matches[i].distance < 3*min_dist )
        { good_matches.push_back( matches[i] ); }
    }

    std::vector< Point2f > obj;
    std::vector< Point2f > scene;

    for( int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        obj.push_back( keypoints_object[ good_matches[i].queryIdx ].pt );
        scene.push_back( keypoints_scene[ good_matches[i].trainIdx ].pt );
    }

    // Find the homography matrix
    Mat H = findHomography( obj, scene, CV_RANSAC );

    // Use the homography matrix to warp the images
    cv::Mat result;
    warpPerspective(image1,result,H,cv::Size());
    imshow( "WARP", result );

    cv::Mat half( result, cv::Rect( 0, 0, image2.cols, image2.rows ) );
    image2.copyTo( half );

    Mat key;
    //drawKeypoints(image1,keypoints_scene,key,Scalar::all(-1), DrawMatchesFlags::DEFAULT );
    //drawMatches(image2, keypoints_scene, image1, keypoints_object, matches, result);

    imshow( "Result", result );

    imwrite( "teste.jpg", result );
    waitKey(0);
    return 0;
}

/** @function readme */
void readme()
{ std::cout << " Usage: Panorama < img1 > < img2 >" << std::endl; }

This is the result I get, where the second image appears: [image]

I would like my image to come out like this:
[image]

1 Answer:

Answer 0 (score: 1)

The following modification should solve your problem with the black (cut-off) part of the stitched image.

Try changing this line:

warpPerspective(image1,result,H,cv::Size());

to:

warpPerspective(image1,result,H,cv::Size(image1.cols+image2.cols,image1.rows));

This creates the result matrix wide enough to hold both images side by side, while keeping the number of rows equal to that of image1, so no unneeded rows are created and the warped image is no longer cut off.
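For reference, here is a minimal sketch (not part of the original answer) of one way to find out where image1's (0,0) corner ends up after the warp, which is what the question asks. It reuses image1, image2 and H from the code above, maps the four corners of image1 through H with cv::perspectiveTransform, and then prepends a translation to H so that nothing falls at negative coordinates and gets clipped.

// --- Sketch: where do image1's corners (including (0,0)) land after H? ---
// (needs <algorithm> for std::min / std::max)
std::vector<cv::Point2f> corners(4), warped(4);
corners[0] = cv::Point2f(0.f, 0.f);                                // top-left (0,0)
corners[1] = cv::Point2f((float)image1.cols, 0.f);                 // top-right
corners[2] = cv::Point2f((float)image1.cols, (float)image1.rows);  // bottom-right
corners[3] = cv::Point2f(0.f, (float)image1.rows);                 // bottom-left

// warped[0] is where the (0,0) corner of image1 ends up after the warp
cv::perspectiveTransform(corners, warped, H);

// Bounding box of the warped image; negative min_x/min_y is the part that gets cut off
float min_x = warped[0].x, min_y = warped[0].y;
float max_x = warped[0].x, max_y = warped[0].y;
for (int i = 1; i < 4; i++)
{
    min_x = std::min(min_x, warped[i].x);  min_y = std::min(min_y, warped[i].y);
    max_x = std::max(max_x, warped[i].x);  max_y = std::max(max_y, warped[i].y);
}

// Prepend a translation so the warped image starts at (0,0) instead of being clipped
double tx = -min_x, ty = -min_y;
cv::Mat T = (cv::Mat_<double>(3, 3) << 1, 0, tx,
                                       0, 1, ty,
                                       0, 0, 1);

cv::Mat shifted;
cv::warpPerspective(image1, shifted, T * H,
                    cv::Size(cvCeil(max_x - min_x), cvCeil(max_y - min_y)));

If you use this translated warp for the mosaic, note that image2 must then be copied into the panorama at offset (tx, ty) rather than at (0,0), and the output canvas has to be made large enough to contain image2 at that shifted position as well.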