How to compute SIFT descriptors at given pixels with OpenCV

Asked: 2016-09-01 05:54:54

Tags: opencv computer-vision sift feature-descriptor

Hi everyone. I am using OpenCV 3 with the contrib modules. My problem is that I want to compute SIFT descriptors at given pixels (not at detected keypoints).

I tried to build a vector of KeyPoint from the given pixels. However, to create a KeyPoint I need to know the size information in addition to the pixel position.

KeyPoint (Point2f _pt, float _size, float _angle=-1, float _response=0, int _octave=0, int _class_id=-1)

Can anyone tell me what the size in this constructor should be? Do I need the angle information to compute SIFT descriptors? And how can I compute them with OpenCV 3?

1 Answer:

Answer 0 (score: 2):

@Ukarsh: I agree that the SIFT descriptor needs the orientation and scale information of the keypoint. David G. Lowe's original paper (Distinctive Image Features from Scale-Invariant Keypoints) states: "In order to achieve orientation invariance, the coordinates of the descriptor and the gradient orientations are rotated relative to the keypoint orientation." The scale information, in turn, is used to select the level of Gaussian blur of the image during descriptor computation.

However, this question is about computing descriptors at given pixels. Note that a given pixel position is not a SIFT keypoint computed through the usual process, so the orientation and scale information are not available in this case. Therefore, the code mentioned in the previous answer computes the SIFT descriptor at the given pixel with a default scale (i.e. 1) and a default orientation (without rotating the gradient orientations of the neighborhood).

@Teng Long: Also, I think the method you used to match keypoints between the two images (the original and the rotated one) is somewhat ambiguous. You should run SIFT keypoint detection on each of the two images separately and compute their respective descriptors separately. Then you can use brute-force matching on the two sets of keypoints.

The following code detects SIFT keypoints on an image and on a 45-degree rotated version of it, computes their SIFT descriptors, and matches them using brute-force matching.

#include "opencv2/core.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/xfeatures2d.hpp"
#include <stdio.h>

using namespace cv;

int main( int argc, char** argv )
{
    Mat img_1, img_2;

    // Load image in grayscale format
    img_1 = imread( "scene.jpg", IMREAD_GRAYSCALE );

    // Rotate the input image without losing the corners
    Point center = Point(img_1.cols / 2, img_1.rows / 2);
    double angle = 45, scale = 1;
    Mat rot = getRotationMatrix2D(center, angle, scale);
    Rect bbox = cv::RotatedRect(center, img_1.size(), angle).boundingRect();
    rot.at<double>(0,2) += bbox.width/2.0 - center.x;
    rot.at<double>(1,2) += bbox.height/2.0 - center.y;
    warpAffine(img_1, img_2, rot, bbox.size());

    // SIFT feature detector and descriptor extractor (OpenCV 3, xfeatures2d module)
    Ptr<xfeatures2d::SIFT> sift = xfeatures2d::SIFT::create();
    std::vector<KeyPoint> keypoints_1, keypoints_2;

    sift->detect( img_1, keypoints_1 );
    sift->detect( img_2, keypoints_2 );

    // Calculate descriptors
    Mat descriptors_1, descriptors_2;

    sift->compute( img_1, keypoints_1, descriptors_1 );
    sift->compute( img_2, keypoints_2, descriptors_2 );

    // Matching descriptors using Brute Force
    BFMatcher matcher(NORM_L2);
    std::vector<DMatch> matches;
    matcher.match(descriptors_1, descriptors_2, matches);


    //-- Quick calculation of max and min distances between Keypoints
    double max_dist = 0; double min_dist = 100;

    for( int i = 0; i < descriptors_1.rows; i++ )
    { double dist = matches[i].distance;
      if( dist < min_dist ) min_dist = dist;
      if( dist > max_dist ) max_dist = dist;
    }   

    //-- Keep only "good" matches (i.e. whose distance is less than 2*min_dist,
    //-- or a small arbitrary value ( 0.02 ) in the event that min_dist is very
    //-- small)
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_1.rows; i++ )
    { if( matches[i].distance <= max(2*min_dist, 0.02) )
      { good_matches.push_back( matches[i]); }
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2,
               good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
               std::vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    imshow( "Good Matches", img_matches );

    waitKey(0);
    return 0;
}

The result is as follows:

(image: good matches between the original and the 45-degree rotated image)