In my OpenCV project, I want to detect copy-move forgery in an image. I know how to use OpenCV's FLANN matcher for feature matching between two different images, but I am confused about how to use FLANN to detect copy-move forgery within a single image.
P.S.1: I already have the SIFT keypoints and descriptors of the image, and I am stuck at the feature-matching stage.
P.S.2: The type of feature matching does not matter to me.
Thanks in advance.
Update:
These images are an example of what I need.
There is code that matches features between two images and does something similar, but on two images (not one). The code, in Android-native OpenCV style, looks like this:
vector<KeyPoint> keypoints;
Mat descriptors;

// Create a SIFT keypoint detector.
SiftFeatureDetector detector;
detector.detect(image_gray, keypoints);
LOGI("Detected %d Keypoints ...", (int) keypoints.size());

// Compute feature descriptors.
detector.compute(image, keypoints, descriptors);
LOGI("Computed Features ...");

// Match the descriptors against themselves.
FlannBasedMatcher matcher;
std::vector<DMatch> matches;
matcher.match(descriptors, descriptors, matches);

//-- Quick calculation of max and min distances between keypoints
double max_dist = 0; double min_dist = 100;
for (int i = 0; i < descriptors.rows; i++)
{
    double dist = matches[i].distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
printf("-- Max dist : %f \n", max_dist);
printf("-- Min dist : %f \n", min_dist);

//-- Keep only "good" matches (i.e. whose distance is less than 2*min_dist,
//-- or a small arbitrary value (0.02) in the event that min_dist is very small).
//-- PS.- radiusMatch can also be used here.
std::vector<DMatch> good_matches;
for (int i = 0; i < descriptors.rows; i++)
{
    if (matches[i].distance <= max(2 * min_dist, 0.02))
        good_matches.push_back(matches[i]);
}

//-- Draw only "good" matches
Mat img_matches;
drawMatches(image, keypoints, image, keypoints,
            good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
            vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

//-- Show detected matches
// imshow("Good Matches", img_matches);
imwrite(imgOutFile, img_matches);
Answer 0 (score: 2):
I am not sure that using keypoints is a good idea for this problem. I would rather try template matching (using a sliding window over the image as the patch). Compared to keypoints, this method has the drawback of being sensitive to rotation and scaling.
If you want to use keypoints, you can compute each keypoint's matching score against every other keypoint with the knnMatch function of cv::BFMatcher, keeping only matches between distinct points, i.e. points whose pixel distance is greater than zero (or some threshold).
int nknn = 10;       // max number of matches for each keypoint
double minDist = 0.5; // pixel-distance threshold

// Match each keypoint against every other keypoint (including itself)
cv::BFMatcher matcher(cv::NORM_L2, false);
std::vector< std::vector< cv::DMatch > > matches;
matcher.knnMatch(descriptors, descriptors, matches, nknn);

//-- Quick calculation of max and min distances between keypoints
double max_dist = 0; double min_dist = 100;
for (size_t i = 0; i < matches.size(); i++)
{
    for (size_t j = 0; j < matches[i].size(); j++)
    {
        double dist = matches[i][j].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
}

// Keep matches that are close in descriptor space but far apart in the image
std::vector< cv::DMatch > good_matches;
for (size_t i = 0; i < matches.size(); i++)
{
    for (size_t j = 0; j < matches[i].size(); j++)
    {
        // The METRIC distance (in descriptor space)
        if (matches[i][j].distance > std::max(2 * min_dist, 0.02))
            continue;
        // The PIXELIC distance (in the image plane)
        cv::Point2f pt1 = keypoints[matches[i][j].queryIdx].pt;
        cv::Point2f pt2 = keypoints[matches[i][j].trainIdx].pt;
        double dist = cv::norm(pt1 - pt2);
        if (dist > minDist)
            good_matches.push_back(matches[i][j]);
    }
}

cv::Mat img_matches;
cv::drawMatches(image_gray, keypoints, image_gray, keypoints, good_matches, img_matches);