Matching specific elements of an image; known shape OpenCV C++

Time: 2015-02-04 12:08:18

Tags: c++ opencv image-recognition keypoint canny-operator

After getting no answer to this question, I eventually came across some interesting-looking possible solutions:

the Robust Matcher from this post, and the Canny detector from this post.

After setting up the Canny Edge Detector (referring to its Documentation) and implementing the Robust Matcher shown on the first page I linked, I took some pictures of logos / items of clothing and had some decent success combining the two:

Picture of a logo matching with a picture of an item of clothing with that logo on it
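For context, here is a minimal sketch of this kind of Canny-plus-keypoint-matching pipeline. It is not the exact setup used here: it assumes the OpenCV 3+ API, and the file names, the ORB/BFMatcher choice and the Canny thresholds are placeholders.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Placeholder file names -- not the actual images from this question.
    cv::Mat logo  = cv::imread("logo.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat scene = cv::imread("shirt.png", cv::IMREAD_GRAYSCALE);
    if (logo.empty() || scene.empty()) return 1;

    // Edge maps first, then keypoints/descriptors computed on the edge maps.
    cv::Mat logoEdges, sceneEdges;
    cv::Canny(logo,  logoEdges,  50, 150);   // thresholds are illustrative only
    cv::Canny(scene, sceneEdges, 50, 150);

    cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
    std::vector<cv::KeyPoint> logoKps, sceneKps;
    cv::Mat logoDesc, sceneDesc;
    orb->detectAndCompute(logoEdges,  cv::noArray(), logoKps,  logoDesc);
    orb->detectAndCompute(sceneEdges, cv::noArray(), sceneKps, sceneDesc);

    // Brute-force matching with cross-check as a stand-in for the Robust Matcher.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(logoDesc, sceneDesc, matches);

    cv::Mat vis;
    cv::drawMatches(logoEdges, logoKps, sceneEdges, sceneKps, matches, vis);
    cv::imshow("Canny + ORB matches", vis);
    cv::waitKey(0);
    return 0;
}
```

The actual setup runs the Robust Matcher from the linked post on the Canny images rather than the plain cross-checked matcher above.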

But in other, very similar cases it was way off:

A picture of a different logo with the "exact" same design, against the same image of clothing as above.

So it got me wondering: is there a way to match several specific points of an image in order to define certain regions of a given image?

So instead of reading the image in, doing all of the matching on keypoints, discarding the "bad" keypoints and so on, is it possible to make the system aware of where a keypoint sits relative to another, and then discard matches where the points are right next to each other in one image but in completely different positions in the other?

(such as the light blue and royal blue "matches", which sit right next to each other in the left image but are matched to completely separate parts of the right image)

For reference, this is the RANSAC test / homography code currently being used (based on the Robust Matcher linked above):

```cpp
cv::Mat ransacTest(const std::vector<cv::DMatch>& matches,
                   const std::vector<cv::KeyPoint>& trainKeypoints,
                   const std::vector<cv::KeyPoint>& testKeypoints,
                   std::vector<cv::DMatch>& outMatches)
{
    // Note: distance, confidence, refineF, trainCannyImg and testCannyImg
    // are defined outside this function (e.g. as members of the matcher class).

    // Convert keypoints into Point2f
    std::vector<cv::Point2f> points1, points2;
    cv::Mat fundemental;
    for (std::vector<cv::DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it) {
        // Get the position of left keypoints
        float x = trainKeypoints[it->queryIdx].pt.x;
        float y = trainKeypoints[it->queryIdx].pt.y;
        points1.push_back(cv::Point2f(x, y));
        // Get the position of right keypoints
        x = testKeypoints[it->trainIdx].pt.x;
        y = testKeypoints[it->trainIdx].pt.y;
        points2.push_back(cv::Point2f(x, y));
    }

    // Compute F matrix using RANSAC
    std::vector<uchar> inliers(points1.size(), 0);
    if (points1.size() > 0 && points2.size() > 0) {
        cv::Mat fundemental = cv::findFundamentalMat(
            cv::Mat(points1), cv::Mat(points2), // matching points
            inliers,                            // match status (inlier or outlier)
            CV_FM_RANSAC,                       // RANSAC method
            distance,                           // distance to epipolar line
            confidence);                        // confidence probability

        // extract the surviving (inliers) matches
        std::vector<uchar>::const_iterator itIn = inliers.begin();
        std::vector<cv::DMatch>::const_iterator itM = matches.begin();
        // for all matches
        for ( ; itIn != inliers.end(); ++itIn, ++itM) {
            if (*itIn) { // it is a valid match
                outMatches.push_back(*itM);
            }
        }

        if (refineF) {
            // The F matrix will be recomputed with all accepted matches
            // Convert keypoints into Point2f for final F computation
            points1.clear();
            points2.clear();
            for (std::vector<cv::DMatch>::const_iterator it = outMatches.begin(); it != outMatches.end(); ++it) {
                // Get the position of left keypoints
                float x = trainKeypoints[it->queryIdx].pt.x;
                float y = trainKeypoints[it->queryIdx].pt.y;
                points1.push_back(cv::Point2f(x, y));
                // Get the position of right keypoints
                x = testKeypoints[it->trainIdx].pt.x;
                y = testKeypoints[it->trainIdx].pt.y;
                points2.push_back(cv::Point2f(x, y));
            }
            // Compute 8-point F from all accepted matches
            if (points1.size() > 0 && points2.size() > 0) {
                fundemental = cv::findFundamentalMat(
                    cv::Mat(points1), cv::Mat(points2),
                    CV_FM_8POINT); // 8-point method
            }
        }
    }

    Mat imgMatchesMat;
    drawMatches(trainCannyImg, trainKeypoints, testCannyImg, testKeypoints, outMatches, imgMatchesMat); //, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

    Mat H = findHomography(points1, points2, CV_RANSAC, 3);
    // -- Little difference when CV_RANSAC is changed to CV_LMEDS or 0

    //-- Get the corners from the image_1 (the object to be "detected")
    std::vector<Point2f> obj_corners(4);
    obj_corners[0] = cvPoint(0, 0);
    obj_corners[1] = cvPoint(trainCannyImg.cols, 0);
    obj_corners[2] = cvPoint(trainCannyImg.cols, trainCannyImg.rows);
    obj_corners[3] = cvPoint(0, trainCannyImg.rows);
    std::vector<Point2f> scene_corners(4);

    perspectiveTransform(obj_corners, scene_corners, H);

    //-- Draw lines between the corners (the mapped object in the scene - image_2)
    line(imgMatchesMat, scene_corners[0] + Point2f(trainCannyImg.cols, 0), scene_corners[1] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);
    line(imgMatchesMat, scene_corners[1] + Point2f(trainCannyImg.cols, 0), scene_corners[2] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);
    line(imgMatchesMat, scene_corners[2] + Point2f(trainCannyImg.cols, 0), scene_corners[3] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);
    line(imgMatchesMat, scene_corners[3] + Point2f(trainCannyImg.cols, 0), scene_corners[0] + Point2f(trainCannyImg.cols, 0), Scalar(0, 255, 0), 4);

    //-- Show detected matches
    imshow("Good Matches & Object detection", imgMatchesMat);
    waitKey(0);

    return fundemental;
}
```
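Purely as an illustration of the "relative position" idea (this is not code from the question; the pairwise-distance heuristic and both thresholds are assumptions), one crude filter could look like this:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// For every pair of matches, compare the distance between the two keypoints in
// the object image with the distance between their counterparts in the scene
// image. If the two distances differ by more than maxRatio, both matches get a
// violation; matches that conflict with most other matches are dropped.
std::vector<cv::DMatch> filterBySpatialConsistency(
        const std::vector<cv::DMatch>& matches,
        const std::vector<cv::KeyPoint>& objKeypoints,
        const std::vector<cv::KeyPoint>& sceneKeypoints,
        double maxRatio = 3.0,          // assumed threshold
        double maxViolationFrac = 0.5)  // assumed threshold
{
    const std::size_t n = matches.size();
    std::vector<int> violations(n, 0);

    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = i + 1; j < n; ++j) {
            cv::Point2f oi = objKeypoints[matches[i].queryIdx].pt;
            cv::Point2f oj = objKeypoints[matches[j].queryIdx].pt;
            cv::Point2f si = sceneKeypoints[matches[i].trainIdx].pt;
            cv::Point2f sj = sceneKeypoints[matches[j].trainIdx].pt;

            double dObj   = std::hypot(oi.x - oj.x, oi.y - oj.y);
            double dScene = std::hypot(si.x - sj.x, si.y - sj.y);
            if (dObj < 1e-3 || dScene < 1e-3) continue;   // skip duplicate points

            // Points close together in one image but far apart in the other
            // (or vice versa) make this pair geometrically suspicious.
            double ratio = std::max(dObj / dScene, dScene / dObj);
            if (ratio > maxRatio) { ++violations[i]; ++violations[j]; }
        }
    }

    std::vector<cv::DMatch> kept;
    for (std::size_t i = 0; i < n; ++i)
        if (n <= 1 || violations[i] < maxViolationFrac * (n - 1))
            kept.push_back(matches[i]);
    return kept;
}
```

The ransacTest above already enforces a global geometric constraint through the fundamental matrix; this sketch is only a simpler, purely pairwise way of expressing the "discard matches that move relative to each other" idea.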

EDIT

For Micka

enter image description here

The "rectangle" is drawn (added in Paint) at the centre of the white box.

Object (52, 37)
Scene  (219, 151)
Object (49, 47)
Scene  (241,139)
Object (51, 50)
Scene  (242, 141)
Object (37, 53)
Scene  (228, 145)
Object (114, 37.2)
Scene  (281, 162)
Object (48.96, 46.08)
Scene  (216, 160.08)
Object (44.64, 54.72)
Scene  (211.68, 168.48)
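As an aside, the following sketch (not part of the original code) fits a homography to exactly the correspondences listed above and prints each pair's reprojection error; a large error on an individual pair would single out the match that drags the projected rectangle away from the white box:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    // The Object/Scene coordinates copied from the listing above.
    std::vector<cv::Point2f> obj = {
        {52, 37}, {49, 47}, {51, 50}, {37, 53},
        {114, 37.2f}, {48.96f, 46.08f}, {44.64f, 54.72f}
    };
    std::vector<cv::Point2f> scene = {
        {219, 151}, {241, 139}, {242, 141}, {228, 145},
        {281, 162}, {216, 160.08f}, {211.68f, 168.48f}
    };

    cv::Mat H = cv::findHomography(obj, scene, cv::RANSAC, 3);
    if (H.empty()) { std::cout << "findHomography failed\n"; return 1; }

    // Map each object point through H and compare with the matched scene point.
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(obj, projected, H);

    for (std::size_t i = 0; i < obj.size(); ++i) {
        double err = std::hypot(projected[i].x - scene[i].x,
                                projected[i].y - scene[i].y);
        std::cout << "pair " << i << " reprojection error: " << err << " px\n";
    }
    return 0;
}
```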

Homography output

A slightly different input scene (things keep moving around, and it takes ages to work out the exact conditions needed to reproduce an image exactly), but the same result:


The image in question:

enter image description here

0 Answers:

No answers yet.