OpenCV: matching camera frames of the same scene against a stored model does not produce a 100% match

Date: 2018-04-20 13:16:53

Tags: c++ opencv surf

My goal is to match an image captured from a camera against some models and find the closest one, but I think I am missing something. This is what I do: first I grab a frame from the camera, select a region, use SURF to extract keypoints and compute descriptors, and store them in an XML file (I also store the model as model.png). This is my model. Then I grab another frame (a few seconds later), select the same region, compute its descriptors and match them against the previously stored ones. The result is nowhere near 100% (I use the ratio between the good matches and the number of keypoints), as I would expect it to be. For comparison, if I load model.png, compute its descriptors and match against the stored ones, I get a (more or less) 100% match, which is reasonable. Here is my code:

#include <iostream>
#include "opencv2/opencv.hpp"
#include "opencv2/nonfree/nonfree.hpp"

using namespace std;

std::vector<cv::KeyPoint> detectKeypoints(cv::Mat image, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    std::vector<cv::KeyPoint> keypoints;
    cv::SurfFeatureDetector detector(hessianTh,nOctaves,nOctaveLayers,extended,upright);
    detector.detect(image,keypoints);
    return keypoints; }

cv::Mat computeDescriptors(cv::Mat image,std::vector<cv::KeyPoint> keypoints, int hessianTh, int nOctaves, int nOctaveLayers, bool extended, bool upright) {
    cv::SurfDescriptorExtractor extractor(hessianTh,nOctaves,nOctaveLayers,extended,upright);
    cv::Mat imageDescriptors;
    extractor.compute(image,keypoints,imageDescriptors);
    return imageDescriptors; }

int main(int argc, char *argv[]) {
    cv::VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 2304); 
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 1536); 
    cv::Mat frame;
    cap >> frame;
    cv::Rect selection(939,482,1063-939,640-482);

    cv::Mat roi = frame(selection).clone();
    //cv::Mat roi=cv::imread("model.png");  
    cv::cvtColor(roi,roi,CV_BGR2GRAY);
    cv::equalizeHist(roi,roi);

    if (std::stoi(argv[1])==1)   // run with "1" to build and store the model, any other value to match against it
    {
        std::vector<cv::KeyPoint> keypoints = detectKeypoints(roi,400,4,2,true,false);
        cv::FileStorage fs("model.xml", cv::FileStorage::WRITE);
        cv::write(fs,"keypoints",keypoints);
        cv::write(fs,"descriptors",computeDescriptors(roi,keypoints,400,4,2,true,false));
        fs.release();
        cv::imwrite("model.png",roi);
    }
    else
    {
        cv::FileStorage fs("model.xml", cv::FileStorage::READ);
        std::vector<cv::KeyPoint> modelkeypoints;
        cv::Mat modeldescriptor;
        cv::FileNode filenode = fs["keypoints"];
        cv::read(filenode,modelkeypoints);
        filenode = fs["descriptors"];
        cv::read(filenode, modeldescriptor);
        fs.release();

        std::vector<cv::KeyPoint> roikeypoints = detectKeypoints(roi,400,4,2,true,false);
        cv::Mat roidescriptor = computeDescriptors(roi,roikeypoints,400,4,2,true,false);

        std::vector<std::vector<cv::DMatch>> matches;
        cv::BFMatcher matcher(cv::NORM_L2);
        if(roikeypoints.size()<modelkeypoints.size())
            matcher.knnMatch(roidescriptor, modeldescriptor, matches, 2);  // Find two nearest matches
        else
            matcher.knnMatch(modeldescriptor, roidescriptor, matches, 2);

        vector<cv::DMatch> good_matches;
        for (size_t i = 0; i < matches.size(); ++i)
        {
            const float ratio = 0.7;
            if (matches[i][0].distance < ratio * matches[i][1].distance)
            {
                good_matches.push_back(matches[i][0]);
            }
        }

        cv::Mat matching;

        cv::Mat model = cv::imread("model.png");
        if(roikeypoints.size()<modelkeypoints.size())
            cv::drawMatches(roi,roikeypoints,model,modelkeypoints,good_matches,matching);
        else
            cv::drawMatches(model,modelkeypoints,roi,roikeypoints,good_matches,matching);

        cv::imwrite("matches.png",matching);

        float result = static_cast<float>(good_matches.size())/static_cast<float>(roikeypoints.size());
        std::cout << result << std::endl;
    }
    return 0; }

Any suggestion would be appreciated, this is driving me crazy...

1 Answer:

Answer 0 (score: 0)

This is expected: the small changes between the two frames are the reason why you do not get a 100% match. On the same image, however, the SURF features are located at exactly the same points and the computed descriptors are identical. So tune the approach for your camera: plot the distances of the matched features when the scene is the same, and set a threshold on the distance such that most (maybe 95%) of those matches are accepted. This way you will have a low false-match rate while still keeping a high true-match rate.
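
A minimal sketch of that thresholding idea, assuming OpenCV 2.x as in the question and that the model descriptors have already been loaded from model.xml; the 95% coverage value and the two helper names are illustrative, not part of the answer:

#include <algorithm>
#include <vector>
#include "opencv2/opencv.hpp"

// Calibration: match descriptors from two captures of the *same* scene and
// pick the distance below which most (here 95%) of those matches fall.
float pickDistanceThreshold(const cv::Mat &modelDescriptors,
                            const cv::Mat &sameSceneDescriptors,
                            float coverage = 0.95f)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(sameSceneDescriptors, modelDescriptors, matches); // nearest neighbour only

    std::vector<float> distances;
    for (size_t i = 0; i < matches.size(); ++i)
        distances.push_back(matches[i].distance);
    if (distances.empty())
        return 0.f;

    std::sort(distances.begin(), distances.end());
    size_t idx = static_cast<size_t>(coverage * (distances.size() - 1));
    return distances[idx];
}

// Runtime: fraction of the frame's descriptors whose nearest model descriptor
// lies within the calibrated threshold.
float matchScore(const cv::Mat &modelDescriptors,
                 const cv::Mat &frameDescriptors,
                 float threshold)
{
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(frameDescriptors, modelDescriptors, matches);

    int accepted = 0;
    for (size_t i = 0; i < matches.size(); ++i)
        if (matches[i].distance < threshold)
            ++accepted;
    return static_cast<float>(accepted) / static_cast<float>(frameDescriptors.rows);
}

With this in place, the score good_matches.size()/roikeypoints.size() from the question would be replaced by matchScore(modeldescriptor, roidescriptor, threshold), and the threshold can be re-estimated whenever the camera, lighting, or ROI changes.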