OpenCV keypoint matching: the DMatch distance variable

Asked: 2014-10-24 13:01:03

Tags: opencv computer-vision feature-detection

I am working through the code from the OpenCV tutorial "Features2D + Homography to find a known object".

What I don't understand is the distance variable in the matcher class. Is it the distance in pixels between the matched keypoints in the two images?

This Q&A says it is a similarity measure (Euclidean distance, or Hamming distance in the case of binary descriptors), computed from the distance between the descriptor vectors.

Could somebody share how this distance is computed, or how to match keypoints without using the existing matchers in OpenCV?

 //-- Step 3: Matching descriptor vectors using FLANN matcher
  FlannBasedMatcher matcher;
  std::vector< DMatch > matches;
  matcher.match( descriptors_object, descriptors_scene, matches );

  double max_dist = 0; double min_dist = 100;

  //-- Quick calculation of max and min distances between keypoints
  for( int i = 0; i < descriptors_object.rows; i++ )
  { double dist = matches[i].distance;  // --> what does this distance indicate?
    if( dist < min_dist ) min_dist = dist;
    if( dist > max_dist ) max_dist = dist;
  }
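
If that Q&A is right, then matches[i].distance is not a pixel distance at all: it should be reproducible by hand as the norm between the two matched descriptor rows. Here is a sketch of what I mean (my own check, assuming float descriptors such as SIFT/SURF so the relevant norm is L2; DMatch::queryIdx indexes descriptors_object and DMatch::trainIdx indexes descriptors_scene):

  for( size_t k = 0; k < matches.size(); k++ )
  {
      const DMatch& m = matches[k];
      // Recompute the reported distance in descriptor space by hand.
      double d = norm( descriptors_object.row(m.queryIdx), // query = object
                       descriptors_scene.row(m.trainIdx),  // train = scene
                       NORM_L2 );
      // d should match matches[k].distance (note: some OpenCV versions report
      // squared L2 distances from FLANN, in which case compare d*d instead).
  }

Is that the right way to think about it?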


Thanks.

1 answer:

Answer 0 (score: 1)

I had some problems with real-time object matching using the SIFT feature detector. Here is my solution for video.

First, I created a struct to store the matched keypoints. The struct holds the location of a keypoint in templateImage, the location of the matched keypoint in inputImage, and a similarity measure. Here I used the cross-correlation of the descriptor vectors as the similarity measure.

struct MatchedPair
{
    Point locationinTemplate;
    Point matchedLocinImage;
    float correlation;
    MatchedPair(Point loc)
    {
        locationinTemplate = loc;
        correlation = 0; // start from zero so the best-match search below works
    }
}; // note: the struct definition needs this trailing semicolon

I am going to sort the matched keypoints by their similarity, so I need a helper function that tells std::sort() how to compare my MatchedPair objects.

bool comparator(const MatchedPair& a, const MatchedPair& b)
{
    return a.correlation > b.correlation; // sort by descending correlation
}

Now the main code starts. I use the standard approach to detect and describe features in the input image and templateImage. After computing the features, I implemented my own matching function. Here is the answer you are looking for:

#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT is in the nonfree module in OpenCV 2.x
#include <algorithm>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat templateImage = imread("template.png", IMREAD_GRAYSCALE); // read the template image
    VideoCapture cap("input.mpeg");
    Mat frame;

    vector<KeyPoint> InputKeypts, TemplateKeypts;
    SiftFeatureDetector detector;
    SiftDescriptorExtractor extractor;
    Mat InputDescriptor, templateDescriptor, result;
    vector<MatchedPair> mpts;
    Scalar s;
    cap >> frame; // grab one frame so we know the frame size
    cvtColor(frame, frame, CV_BGR2GRAY);
    Mat outputImage = Mat::zeros(templateImage.rows + frame.rows, templateImage.cols + frame.cols, CV_8UC1);
    detector.detect(templateImage, TemplateKeypts); // detect the template's interest points
    extractor.compute(templateImage, TemplateKeypts, templateDescriptor);

    while (true)
    {
        mpts.clear(); // clear for the new frame
        cap >> frame; // read the next video frame
        if (frame.empty()) break; // stop at the end of the video
        outputImage = Mat::zeros(templateImage.rows + frame.rows, templateImage.cols + frame.cols, CV_8UC1); // create the output image
        cvtColor(frame, frame, CV_BGR2GRAY);
        detector.detect(frame, InputKeypts);
        extractor.compute(frame, InputKeypts, InputDescriptor); // detect and describe the frame's features

        /*
            So far we have computed descriptors for the template and the current
            frame using the traditional methods. From here on we implement our
            own match method.

            - A descriptor matrix has 128 columns (for SIFT) holding the
              features of one keypoint.
            - Each row of a descriptor matrix represents the 128 features of
              one keypoint.

            Match methods use these descriptor matrices to calculate similarity.
            My approach is to use the cross-correlation of the keypoints'
            descriptor vectors as the similarity. The code below shows how.
        */

        // Iterate over the rows of templateDescriptor (one row per keypoint
        // extracted from the template image): i indexes template keypoints,
        // j indexes input-frame keypoints.
        for (int i = 0; i < templateDescriptor.rows; i++)
        {
            mpts.push_back(MatchedPair(TemplateKeypts[i].pt));
            for (int j = 0; j < InputDescriptor.rows; j++)
            {
                // Use OpenCV's built-in matchTemplate to correlate row(i) of
                // templateDescriptor with row(j) of InputDescriptor.
                matchTemplate(templateDescriptor.row(i), InputDescriptor.row(j), result, CV_TM_CCORR_NORMED);
                s = sum(result); // sum of the 1x1 result = correlation of the two rows

                // Keep the most similar row of the input image: store the
                // correlation and the matched location of the best match so far.
                if (s.val[0] > mpts[i].correlation)
                {
                    mpts[i].correlation = s.val[0];
                    mpts[i].matchedLocinImage = InputKeypts[j].pt;
                }
            }
        }

        // Show the template, the input frame and the matching lines in one output image.
        templateImage.copyTo(outputImage(Rect(0, 0, templateImage.cols, templateImage.rows)));
        frame.copyTo(outputImage(Rect(templateImage.cols, templateImage.rows, frame.cols, frame.rows)));

        // Here is the matching part: select the 4 best matches and draw lines
        // between them. Check the correlation value again, because there can
        // be zero-correlation match pairs.
        std::sort(mpts.begin(), mpts.end(), comparator);
        for (int i = 0; i < 4 && i < (int)mpts.size(); i++)
        {
            if (mpts[i].correlation > 0.90)
            {
                // When drawing the line, account for the location offsets: the
                // template image sits at the upper left of the output image.
                cv::line(outputImage, mpts[i].locationinTemplate, mpts[i].matchedLocinImage + Point(templateImage.cols, templateImage.rows), Scalar::all(255));
            }
        }
        imshow("Output", outputImage);
        waitKey(33);
    }
}
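
A note on this design choice: SIFT descriptor vectors are (approximately) normalized to unit length, and for unit vectors the normalized cross-correlation and the Euclidean distance are directly related: d(a,b)^2 = ||a||^2 + ||b||^2 - 2(a·b) = 2(1 - correlation). So ranking matches by descending correlation here should order them roughly the same way as ranking by ascending matches[i].distance does in the question's FLANN code.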