Finding object boundaries that are close to each other

Date: 2019-02-09 09:51:34

Tags: python opencv computer-vision opencv3.0 opencv-contour

I am working on a computer vision problem where one of the steps is to find the locations where objects are close to each other. For example, in the image below, I am interested in finding the regions marked in grey.

Input:

[input image]

Output:

[expected output image]

My current approach is to first invert the image, then apply a morphological gradient followed by erosion, and then remove some of the uninteresting contours. The script is as follows:

import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('mask.jpg', 0)   # read the mask as grayscale
img = (255 - img)                 # invert so the objects are white

kernel = np.ones((11, 11), np.uint8)
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)   # morphological gradient highlights object edges

kernel = np.ones((5, 5), np.uint8)
img_erosion = cv2.erode(gradient, kernel, iterations=3)        # thin the gradient

img_erosion[img_erosion > 200] = 255   # binarize the eroded gradient
img_erosion[img_erosion <= 200] = 0

def get_contours(mask):
    # [-2:] keeps this working on both OpenCV 3.x (3 return values) and 4.x (2 return values)
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
    return contours

cnts = get_contours(img_erosion)

img_new = np.zeros_like(img_erosion)
img_h, img_w = img_erosion.shape
for i in cnts:
    if cv2.contourArea(i) > 30:
        print(cv2.boundingRect(i), cv2.contourArea(i))
        x, y, w, h = cv2.boundingRect(i)   # boundingRect returns (x, y, w, h)
        if h / w > 5 or w / h > 5 or cv2.contourArea(i) > 100:  # should be elongated
            if (x - 10 > 0) and (y - 10 > 0):  # skip contours near the top or left edge
                if (img_w - x > 10) and (img_h - y > 10):  # skip contours near the bottom or right edge
                    cv2.drawContours(img_new, [i], -1, (255, 255, 255), 2)
kernel = np.ones((3, 3), np.uint8)
img_new = cv2.dilate(img_new, kernel, iterations=2)   # thicken the kept contours
plt.figure(figsize=(6,6))
plt.imshow(img_new)

The result is:

[result image]

However, with this approach I need to tune many parameters, and it fails in many cases, for example when the objects have a different orientation, when the edges are slightly farther apart, or when the edges are 'L'-shaped.

I am new to image processing. Is there any other method that can help me solve this task effectively?

Edit: attaching some more images

[additional example image 1]

[additional example image 2]

(Mostly rectangular polygons, but there is a lot of variation in size and relative position.)

1 answer:

Answer 0 (score: 3):

Probably the best way to do this is via the Stroke Width Transform. It is not in OpenCV, although it is in some other libraries, and you can find some implementations floating around the internet. The Stroke Width Transform finds, for every pixel in the image, the minimum width between the nearest edges. See the following figure from the paper:

[image: Stroke Width Transform example]

Thresholding this image then tells you where edges are separated by only a small distance. For example, all pixels with a value < 40 lie between two edges that are less than 40 pixels apart.
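For illustration only, here is a rough sketch of that thresholding idea. It is not a real Stroke Width Transform; it approximates the edge-to-edge gap with a distance transform, and the file name, Canny thresholds, and the 40-pixel cutoff are assumptions rather than values from the answer:

import cv2
import numpy as np

# NOT a real SWT: the distance transform gives the distance to the nearest edge,
# so roughly twice that value approximates the local gap between two edges.
edges = cv2.Canny(cv2.imread('mask.jpg', 0), 50, 150)        # assumed edge map
dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)    # distance to the nearest edge pixel
gap = 2 * dist                                               # crude stand-in for the stroke width
near_mask = np.uint8((gap < 40) & (edges == 0)) * 255        # pixels between edges < 40 px apart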

So as you can see, thresholding like this gets pretty close to the answer you want. There will be some extra noise, though; for example, you will also pick up values between the square ridges on the edges of the shapes... which you would have to filter out or smooth away (contour approximation would be a simple way to clean them up as a preprocessing step, for example).
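In case it helps, a minimal sketch of that contour-approximation cleanup might look like the following; the 1% epsilon and the binimg variable name are assumptions, not values from the answer, and the cv2/numpy imports from the question's script are reused:

# Redraw each blob from a simplified polygon so the small square ridges disappear.
contours, _ = cv2.findContours(binimg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
clean = np.zeros_like(binimg)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)  # epsilon ~ 1% of the perimeter
    cv2.drawContours(clean, [approx], -1, 255, -1)                     # filled, simplified blob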

However, while I did write a prototype of the SWT at one point, it is not a very good implementation and I haven't really tested it (and actually forgot about it for a few months... possibly a year), so I'm not going to put it out right now. I do have another idea, though, which is a bit simpler and doesn't require reading a research paper.


You have multiple blobs in your input image. Imagine each of them in its own image, and imagine growing each blob by however much distance you are willing to allow between them. If you grow each blob by, say, 10 pixels and they overlap, then they are within 20 pixels of each other. However, this does not give us the full overlapping region, only part of where the two expanded blobs overlap. A different but similar measure is this: if the blobs are grown by 10 pixels, overlap each other, and additionally overlap the original blobs before they were expanded, then the two blobs are within 10 pixels of each other. We will use this second definition to find nearby blobs.

def find_connection_paths(binimg, distance):

    h, w = binimg.shape[:2]
    overlap = np.zeros((h, w), dtype=np.int32)
    overlap_mask = np.zeros((h, w), dtype=np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (distance, distance))

    # grows the blobs by `distance` and sums to get overlaps
    nlabels, labeled = cv2.connectedComponents(binimg, connectivity=8)
    for label in range(1, nlabels):
        mask = 255 * np.uint8(labeled == label)
        overlap += cv2.dilate(mask, kernel, iterations=1) // 255
    overlap = np.uint8(overlap > 1)

    # for each overlap, does the overlap touch the original blob?
    noverlaps, overlap_components = cv2.connectedComponents(overlap, connectivity=8)
    for label in range(1, noverlaps):
        mask = 255 * np.uint8(overlap_components == label)
        if np.any(cv2.bitwise_and(binimg, mask)):
            overlap_mask = cv2.bitwise_or(overlap_mask, mask)
    return overlap_mask
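For reference, a minimal way to call this might look like the following; the file name, threshold, and 20-pixel distance are just assumptions, the cv2/numpy/matplotlib imports from earlier are reused, and the input must be an 8-bit binary image with the blobs as white pixels:

img = cv2.imread('mask.jpg', 0)
_, binimg = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)   # invert first if the blobs are dark
near_regions = find_connection_paths(binimg, 20)              # regions where blobs come within ~20 px
plt.imshow(near_regions, cmap='gray')
plt.show()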

[image: Connecting regions]

Now the output isn't perfect. When I expanded the blobs, I expanded them outwards with a circle (the dilation kernel), so the connecting regions aren't exactly crisp. However, this was the best way to make sure it would work for shapes in any orientation. You could filter this down or trim it. An easy way would be to take each connecting piece (shown in blue) and repeatedly erode it by one pixel until it no longer overlaps the original blobs. Actually, let's add that:

def find_connection_paths(binimg, distance):

    h, w = binimg.shape[:2]
    overlap = np.zeros((h, w), dtype=np.int32)
    overlap_mask = np.zeros((h, w), dtype=np.uint8)
    overlap_min_mask = np.zeros((h, w), dtype=np.uint8)
    kernel_dilate = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (distance, distance))

    # grows the blobs by `distance` and sums to get overlaps
    nlabels, labeled = cv2.connectedComponents(binimg)
    for label in range(1, nlabels):
        mask = 255 * np.uint8(labeled == label)
        overlap += cv2.dilate(mask, kernel_dilate, iterations=1) // 255
    overlap = np.uint8(overlap > 1)

    # for each overlap, does the overlap touch the original blob?
    noverlaps, overlap_components = cv2.connectedComponents(overlap)
    for label in range(1, noverlaps):
        mask = 255 * np.uint8(overlap_components == label)
        if np.any(cv2.bitwise_and(binimg, mask)):
            overlap_mask = cv2.bitwise_or(overlap_mask, mask)

    # for each overlap, shrink until it doesn't touch the original blob
    kernel_erode = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    noverlaps, overlap_components = cv2.connectedComponents(overlap_mask)
    for label in range(1, noverlaps):
        mask = 255 * np.uint8(overlap_components == label)
        while np.any(cv2.bitwise_and(binimg, mask)):
            mask = cv2.erode(mask, kernel_erode, iterations=1)
        overlap_min_mask = cv2.bitwise_or(overlap_min_mask, mask)

    return overlap_min_mask

[image: Minimum overlapping regions]

Of course, you could still grow or shrink these regions however you like if you want them bigger or smaller, but this looks very close to the output you asked for, so I'll leave it there. Also, in case you're wondering, I have no idea where that blob in the top right went; I can take another pass at this last piece later. Note that the last two steps could be merged: check whether the overlap touches the original blobs, and if it does, great, shrink it down and store it in the mask.
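For completeness, a minimal sketch of that merge, reusing the variable names from find_connection_paths above (it simply replaces the last two loops with a single one):

# Merge the two final loops: keep an overlap component only if it touches the
# original blobs, and in that case erode it down right away before storing it.
noverlaps, overlap_components = cv2.connectedComponents(overlap)
for label in range(1, noverlaps):
    mask = 255 * np.uint8(overlap_components == label)
    if np.any(cv2.bitwise_and(binimg, mask)):
        while np.any(cv2.bitwise_and(binimg, mask)):
            mask = cv2.erode(mask, kernel_erode, iterations=1)
        overlap_min_mask = cv2.bitwise_or(overlap_min_mask, mask)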