How to crop away convexity defects?

Time: 2016-02-05 14:51:21

Tags: opencv image-processing computer-vision contour convexity-defects

I am trying to detect and precisely locate certain objects in images from their contours. The contours I get often include some noise (possibly from the background, I don't know). The objects should look similar to rectangles or squares, like:

[example image]

I get very good results with shape matching (cv::matchShapes) for detecting contours that contain these objects, with and without noise, but the precise localization becomes a problem when noise is present.
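
For context, the detection step is essentially a single cv::matchShapes call; a minimal sketch of what I mean (the helper name and the 0.15 threshold are placeholders, not my actual values):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Sketch only: compare a candidate contour against a clean template contour.
    // Lower scores mean a better match; the 0.15 threshold is purely illustrative.
    bool looksLikeObject(const std::vector<cv::Point>& candidate,
                         const std::vector<cv::Point>& templ)
    {
        double score = cv::matchShapes(candidate, templ, cv::CONTOURS_MATCH_I1, 0.0);
        return score < 0.15;
    }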

The noise looks like this, for example:

[two example images of noisy contours]

My idea was to find convexity defects and, if they become too strong, somehow crop away the part that leads to the concavity. Detecting the defects is fine; typically I get two defects per "unwanted structure", but I am stuck on how to decide what and where to remove points from the contour.

Here are some contours, their masks (so you can extract the contours easily), and the convex hull including the thresholded convexity defects:

[nine rows of images: for each sample, the contour, its mask, and the convex hull with thresholded convexity defects]

Could I just walk along the contour and locally decide whether the contour makes a "left turn" (when walking clockwise), and if so, remove contour points until the next left turn? Maybe starting at a convexity defect?
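
To illustrate what I mean by a "left turn": the sign of the cross product of two successive contour segments tells the turn direction. A rough, untested sketch (the helper name and the clockwise assumption are mine):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Returns true if the contour makes a concave "left turn" at index i.
    // Assumes the points are ordered clockwise as drawn on screen
    // (image coordinates, y pointing down); if the contour runs
    // counter-clockwise, flip the comparison at the end.
    bool isLeftTurn(const std::vector<cv::Point>& c, int i)
    {
        int n = static_cast<int>(c.size());
        const cv::Point& prev = c[(i - 1 + n) % n];
        const cv::Point& curr = c[i];
        const cv::Point& next = c[(i + 1) % n];
        // z-component of the cross product of the incoming and outgoing segments
        int cross = (curr.x - prev.x) * (next.y - curr.y)
                  - (curr.y - prev.y) * (next.x - curr.x);
        return cross < 0;
    }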

I am looking for algorithms or code; the programming language should not matter, the algorithm is more important.

3 Answers:

Answer 0 (score: 10):

This approach works directly on the contour points; you do not need to create a mask for it.

The main idea is:

  1. Find the defects on the contour
  2. If there are at least two defects, find the two closest ones
  3. Remove the points between the two closest defects from the contour
  4. Restart from step 1 on the new contour

I get the following results. As you can see, it has some drawbacks for smooth defects (e.g. the 7th image), but it works very well for clearly visible defects. I don't know if this solves your problem, but it can be a starting point. In practice it should be quite fast (you can surely optimize the code below, especially the removeFromContour function). Also, the only parameter of this approach is the amount (depth) of the convexity defect, so it works well with both small and large defect blobs.

[result images for the nine samples]

    #include <opencv2/opencv.hpp>
    using namespace cv;
    using namespace std;
    
    int ed2(const Point& lhs, const Point& rhs)
    {
        return (lhs.x - rhs.x)*(lhs.x - rhs.x) + (lhs.y - rhs.y)*(lhs.y - rhs.y);
    }
    
    vector<Point> removeFromContour(const vector<Point>& contour, const vector<int>& defectsIdx)
    {
        int minDist = INT_MAX;
        int startIdx;
        int endIdx;
    
        // Find nearest defects
        for (int i = 0; i < defectsIdx.size(); ++i)
        {
            for (int j = i + 1; j < defectsIdx.size(); ++j)
            {
                float dist = ed2(contour[defectsIdx[i]], contour[defectsIdx[j]]);
                if (minDist > dist)
                {
                    minDist = dist;
                    startIdx = defectsIdx[i];
                    endIdx = defectsIdx[j];
                }
            }
        }
    
        // Check if intervals are swapped
        if (startIdx <= endIdx)
        {
            int len1 = endIdx - startIdx;
            int len2 = contour.size() - endIdx + startIdx;
            if (len2 < len1)
            {
                swap(startIdx, endIdx);
            }
        }
        else
        {
            int len1 = startIdx - endIdx;
            int len2 = contour.size() - startIdx + endIdx;
            if (len1 < len2)
            {
                swap(startIdx, endIdx);
            }
        }
    
        // Remove unwanted points
        vector<Point> out;
        if (startIdx <= endIdx)
        {
            out.insert(out.end(), contour.begin(), contour.begin() + startIdx);
            out.insert(out.end(), contour.begin() + endIdx, contour.end());
        } 
        else
        {
            out.insert(out.end(), contour.begin() + endIdx, contour.begin() + startIdx);
        }
    
        return out;
    }
    
    int main()
    {
        Mat1b img = imread("path_to_mask", IMREAD_GRAYSCALE);
    
        Mat3b out;
        cvtColor(img, out, COLOR_GRAY2BGR);
    
        vector<vector<Point>> contours;
        findContours(img.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);
    
        vector<Point> pts = contours[0];
    
        vector<int> hullIdx;
        convexHull(pts, hullIdx, false);
    
        vector<Vec4i> defects;
        convexityDefects(pts, hullIdx, defects);
    
        while (true)
        {
            // For debug
            Mat3b dbg;
            cvtColor(img, dbg, COLOR_GRAY2BGR);
    
            vector<vector<Point>> tmp = {pts};
            drawContours(dbg, tmp, 0, Scalar(255, 127, 0));
    
            vector<int> defectsIdx;
            for (const Vec4i& v : defects)
            {
                float depth = float(v[3]) / 256.f;
                if (depth > 2) //  filter defects by depth
                {
                    // Defect found
                    defectsIdx.push_back(v[2]);
    
                    int startidx = v[0]; Point ptStart(pts[startidx]);
                    int endidx = v[1]; Point ptEnd(pts[endidx]);
                    int faridx = v[2]; Point ptFar(pts[faridx]);
    
                    line(dbg, ptStart, ptEnd, Scalar(255, 0, 0), 1);
                    line(dbg, ptStart, ptFar, Scalar(0, 255, 0), 1);
                    line(dbg, ptEnd, ptFar, Scalar(0, 0, 255), 1);
                    circle(dbg, ptFar, 4, Scalar(127, 127, 255), 2);
                }
            }
    
            if (defectsIdx.size() < 2)
            {
                break;
            }
    
        // If I have at least two defects, remove the points between the two nearest defects
            pts = removeFromContour(pts, defectsIdx);
            convexHull(pts, hullIdx, false);
            convexityDefects(pts, hullIdx, defects);
        }
    
    
        // Draw result contour
        vector<vector<Point>> tmp = { pts };
        drawContours(out, tmp, 0, Scalar(0, 0, 255), 1);
    
        imshow("Result", out);
        waitKey();
    
        return 0;
    }
    

Update

Working on the approximated contour (e.g. using CHAIN_APPROX_SIMPLE in findContours) may be faster, but the length of the contour must then be computed with arcLength().

This is the snippet to replace in the swapping section of removeFromContour:

    // Check if intervals are swapped
    if (startIdx <= endIdx)
    {
        //int len11 = endIdx - startIdx;
        vector<Point> inside(contour.begin() + startIdx, contour.begin() + endIdx);
        int len1 = (inside.empty()) ? 0 : arcLength(inside, false);
    
        //int len22 = contour.size() - endIdx + startIdx;
        vector<Point> outside1(contour.begin(), contour.begin() + startIdx);
        vector<Point> outside2(contour.begin() + endIdx, contour.end());
        int len2 = (outside1.empty() ? 0 : arcLength(outside1, false)) + (outside2.empty() ? 0 : arcLength(outside2, false));
    
        if (len2 < len1)
        {
            swap(startIdx, endIdx);
        }
    }
    else
    {
        //int len1 = startIdx - endIdx;
        vector<Point> inside(contour.begin() + endIdx, contour.begin() + startIdx);
        int len1 = (inside.empty()) ? 0 : arcLength(inside, false);
    
    
        //int len2 = contour.size() - startIdx + endIdx;
        vector<Point> outside1(contour.begin(), contour.begin() + endIdx);
        vector<Point> outside2(contour.begin() + startIdx, contour.end());
        int len2 = (outside1.empty() ? 0 : arcLength(outside1, false)) + (outside2.empty() ? 0 : arcLength(outside2, false));
    
        if (len1 < len2)
        {
            swap(startIdx, endIdx);
        }
    }
    

Answer 1 (score: 2):

I came up with the following approach for detecting the bounds of the rectangle/square. It works under a few assumptions: the shape is rectangular or square, it is centered in the image, and it is not tilted.

  • Divide the masked (filled) image into two halves along the x-axis so that you get two regions (top half and bottom half)
  • Take the projection of each region onto the x-axis
  • Take all the non-zero entries of these projections and compute their medians; these medians give you the y bounds
  • Similarly, divide the image into two halves along the y-axis, take the projections onto the y-axis, and compute the medians to get the x bounds
  • Then use these bounds to crop the region

The projection of the top half of a sample image together with the median line is shown below. proj-n-med-line

The resulting bounds and cropped regions for two samples: s1 s2

The code is in Octave/Matlab, and I tested it on Octave (you need the image package to run it).

clear all
close all
pkg load image  % imcrop requires the Octave 'image' package

im = double(imread('kTouF.png'));
[r, c] = size(im);
% top half
p = sum(im(1:int32(end/2), :), 1);
y1 = -median(p(find(p > 0))) + int32(r/2);
% bottom half
p = sum(im(int32(end/2):end, :), 1);
y2 = median(p(find(p > 0))) + int32(r/2);
% left half
p = sum(im(:, 1:int32(end/2)), 2);
x1 = -median(p(find(p > 0))) + int32(c/2);
% right half
p = sum(im(:, int32(end/2):end), 2);
x2 = median(p(find(p > 0))) + int32(c/2);

% crop the image using the bounds
rect = [x1 y1 x2-x1 y2-y1];
cr = imcrop(im, rect);
im2 = zeros(size(im));
im2(y1:y2, x1:x2) = cr;

figure,
axis equal
subplot(1, 2, 1)
imagesc(im)
hold on
plot([x1 x2 x2 x1 x1], [y1 y1 y2 y2 y1], 'g-')
hold off
subplot(1, 2, 2)
imagesc(im2)

Answer 2 (score: 1):

As a starting point, and assuming the defects are never too big relative to the object you want to recognize, you can try a simple erode + dilate strategy before using cv::matchShapes, as shown below.

 #include <opencv2/opencv.hpp>

 int main()
 {
     int max = 40; // depending on expected object and defect size
     cv::Mat img = cv::imread("example.png");
     cv::Mat eroded, dilated;
     cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(max*2, max*2), cv::Point(max, max));
     cv::erode(img, eroded, element);
     cv::dilate(eroded, dilated, element);
     cv::imshow("original", img);
     cv::imshow("eroded", eroded);
     cv::imshow("dilated", dilated);
     cv::waitKey(0);
     return 0;
 }

[result images]