Finding interest points with the SURF detector algorithm

Asked: 2014-05-13 17:28:22

Tags: opencv image-processing point emgucv surf

I have tried hard, but I cannot figure out how to find a single interest point with the SURF algorithm in Emgu CV. I wrote the code below for SURF. I have a problem: the if statement near the part I marked "My Section number = 1" sometimes does not pass, depending on the image. Why is that? When it does pass, the computed homography is not null, and then I can draw circles or lines — but that also has a problem: the circle or rectangle is drawn at point (0,0) on the image. Please help me. I would be grateful.

public Image<Bgr, Byte> Draw(Image<Gray, byte> conditionalImage, Image<Gray, byte> observedImage, out long matchTime)
    {
        //observedImage = observedImage.Resize(, INTER.CV_INTER_LINEAR);
        Stopwatch watch;
        HomographyMatrix homography = null;

        SURFDetector surfCPU = new SURFDetector(500, false);
        VectorOfKeyPoint modelKeyPoints;
        VectorOfKeyPoint observedKeyPoints;
        Matrix<int> indices;

        Matrix<byte> mask;
        int k = 2;
        double uniquenessThreshold = 0.8;
            //extract features from the object image
            modelKeyPoints = surfCPU.DetectKeyPointsRaw(conditionalImage, null);

            Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(conditionalImage, null, modelKeyPoints);

            watch = Stopwatch.StartNew();

            // extract features from the observed image
            observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
            Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);
            BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
            matcher.Add(modelDescriptors);

            indices = new Matrix<int>(observedDescriptors.Rows, k);
            using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
            {
                matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
                mask = new Matrix<byte>(dist.Rows, 1);
                mask.SetValue(255);
                Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
            }

            int nonZeroCount = CvInvoke.cvCountNonZero(mask);

            // My Section number = 1
            if (nonZeroCount >= 4)
            {
                nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                if (nonZeroCount >= 4)
                    homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
            }

            watch.Stop();

        //Draw the matched keypoints
            Image<Bgr, Byte> result = Features2DToolbox.DrawMatches(conditionalImage, modelKeyPoints, observedImage, observedKeyPoints,
                indices, new Bgr(Color.Blue), new Bgr(Color.Red), mask, Features2DToolbox.KeypointDrawType.DEFAULT);

        #region draw the projected region on the image
        if (homography != null)
        {  //draw a rectangle along the projected model
            Rectangle rect = conditionalImage.ROI;
            PointF[] pts = new PointF[] { 
           new PointF(rect.Left, rect.Bottom),
           new PointF(rect.Right, rect.Bottom),
           new PointF(rect.Right, rect.Top),
           new PointF(rect.Left, rect.Top)};
            homography.ProjectPoints(pts);
            PointF _circleCenter = new PointF();
            _circleCenter.X = (pts[3].X + ((pts[2].X - pts[3].X) / 2));
            _circleCenter.Y = (pts[3].Y + ((pts[0].Y - pts[3].Y) / 2));

            result.Draw(new CircleF(_circleCenter, 15), new Bgr(Color.Red), 10);
            result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Cyan), 5);
        }
        #endregion

        matchTime = watch.ElapsedMilliseconds;

        return result;
    }
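For reference, the projection that `homography.ProjectPoints(pts)` performs on the model's corner points is the standard planar homography: each point is multiplied by the 3×3 matrix and divided by the resulting w component. A minimal pure-Python sketch of that math (not the Emgu CV API; the matrix values here are made up for illustration):

```python
def project_points(H, pts):
    # Apply a 3x3 homography H to a list of (x, y) points:
    # [x', y', w'] = H * [x, y, 1], then divide by w'.
    out = []
    for (x, y) in pts:
        xp = H[0][0] * x + H[0][1] * y + H[0][2]
        yp = H[1][0] * x + H[1][1] * y + H[1][2]
        w  = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xp / w, yp / w))
    return out

# Illustrative homography: a pure translation by (10, 5).
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
corners = [(0, 0), (100, 0), (100, 50), (0, 50)]
print(project_points(H, corners))
# [(10.0, 5.0), (110.0, 5.0), (110.0, 55.0), (10.0, 55.0)]
```

If the projected points all land near (0,0), the homography itself is degenerate (too few or bad correspondences), which is worth checking before blaming the drawing calls.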

1 Answer:

Answer 0 (score: 0)

modelKeyPoints = surfCPU.DetectKeyPointsRaw(conditionalImage, null);

After this line of code you have all the interest points of the model image in modelKeyPoints. The same applies to the observed image.

Once you have the keypoints of both images, you need to establish correspondences between points in the observed image and points in the model image. For that, use the knn algorithm:

using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
{
    matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
    mask = new Matrix<byte>(dist.Rows, 1);
    mask.SetValue(255);
    Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
}

Basically, for each point in the observed image this computes the 2 (k) nearest points in the model image. If the ratio between the closest and the second-closest distance exceeds 0.8 (uniquenessThreshold), the match is considered ambiguous and is rejected. For this process you use a mask that acts as both input and output: as input it marks the points that should be matched, and as output it marks the points that were matched correctly.
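This uniqueness vote is essentially Lowe's ratio test. A minimal sketch of the k=2 matching plus ratio filter in plain Python (descriptors as short float lists; the function name and data are illustrative, not the Emgu CV API):

```python
import math

def l2(a, b):
    # Euclidean (L2) distance between two descriptors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_ratio_match(model, observed, uniqueness=0.8):
    # For each observed descriptor, find its two nearest model
    # descriptors; keep the match only if best < uniqueness * second.
    matches = []
    for qi, q in enumerate(observed):
        dists = sorted((l2(q, m), mi) for mi, m in enumerate(model))
        best, second = dists[0], dists[1]
        if best[0] < uniqueness * second[0]:
            matches.append((qi, best[1]))  # (observed index, model index)
    return matches

model = [[0.0, 0.0], [10.0, 10.0], [10.1, 10.0]]
observed = [[0.1, 0.0], [10.05, 10.0]]
print(knn_ratio_match(model, observed))  # [(0, 0)]
```

The second observed descriptor is rejected because its two nearest model descriptors are almost equally close, which is exactly the ambiguity the ratio test is designed to filter out.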

The number of non-zero values in the mask is then the number of matched points.
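In plain terms (an illustrative sketch, not the Emgu CV API): the mask is a column of bytes, 255 where a match survived the vote and 0 where it was rejected, and the non-zero count gates the homography estimation, which needs at least 4 correspondences:

```python
# Illustrative mask after the uniqueness vote: 255 = kept, 0 = rejected.
mask = [255, 0, 255, 255, 0, 255]

non_zero = sum(1 for v in mask if v != 0)
print(non_zero)       # 4
print(non_zero >= 4)  # True -> enough correspondences for a homography
```

If this count falls below 4 (the `if` in the question's "Section number = 1"), no homography can be estimated, which is expected behavior on images with few reliable matches.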