I am trying to compare keypoints from a video frame against an uploaded image. Here is my code. The img parameter is the uploaded image, pixels_color holds the frame's pixels, and height and width also come from the frame.
int Image::match(ofImage img){
    // SURF detector with a Hessian threshold of 400 (OpenCV 2.x API).
    cv::SurfFeatureDetector detector(400);
    vector<cv::KeyPoint> keypoints1, keypoints2;
    // Wrap the raw frame pixels and the uploaded image in cv::Mat headers
    // (no copy; the Mats reference the existing buffers).
    cv::Mat img1(height, width, CV_8UC3, pixels_color);
    cv::Mat img2(img.getHeight(), img.getWidth(), CV_8UC3, img.getPixels());
    detector.detect(img1, keypoints1);
    detector.detect(img2, keypoints2);
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors1, descriptors2;
    extractor.compute(img1, keypoints1, descriptors1);
    extractor.compute(img2, keypoints2, descriptors2);
    cv::BruteForceMatcher<cv::L2<float> > matcher;
    vector<cv::DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    return matches.size();
}
The problem is that the return value is always the size of keypoints1. I don't understand why: unless keypoints2 has size 0, the function always returns the size of keypoints1, no matter whether the ofImage img is related to the frame or not.
Answer (score: 0)
I still don't understand what is going on, so I took a different approach: using FlannBasedMatcher with knnMatch and a nearest-neighbor distance ratio test, which lets me achieve what I want:
int Image::match(ofImage img){
    float nearest_neighbor_distance_ratio = 0.7f;
    cv::SurfFeatureDetector detector(400);
    vector<cv::KeyPoint> keypoints1, keypoints2;
    cv::SurfDescriptorExtractor extractor;
    cv::Mat descriptors1, descriptors2;
    cv::Mat img1(height, width, CV_8UC3, pixels_color);
    cv::Mat img2(img.getHeight(), img.getWidth(), CV_8UC3, img.getPixels());
    detector.detect(img1, keypoints1);
    detector.detect(img2, keypoints2);
    extractor.compute(img1, keypoints1, descriptors1);
    extractor.compute(img2, keypoints2, descriptors2);
    cv::FlannBasedMatcher matcher;
    vector<vector<cv::DMatch> > matches;
    // For each descriptor in img1, retrieve its two nearest neighbors in img2.
    matcher.knnMatch(descriptors1, descriptors2, matches, 2);
    vector<cv::DMatch> good_matches;
    good_matches.reserve(matches.size());
    for(size_t i = 0; i < matches.size(); ++i)
    {
        if(matches[i].size() < 2)
            continue;
        const cv::DMatch &m1 = matches[i][0];
        const cv::DMatch &m2 = matches[i][1];
        // Ratio test: keep a match only if the best neighbor is clearly
        // closer than the second-best one.
        if(m1.distance <= nearest_neighbor_distance_ratio * m2.distance)
            good_matches.push_back(m1);
    }
    return good_matches.size();
}
This version is a bit longer, but I find it works well.