Detecting the edges of bright spots

Time: 2017-03-11 11:21:50

Tags: python opencv image-processing computer-vision

I have X-ray images of a circuit board and I am trying to segment some of the components and find the voids inside them (the voids show up as bright spots in the image). I managed to isolate the components, but I cannot get the contours of the voids.

[Input image]

So far the best I have found is a Laplacian edge detector combined with Gaussian and median filters, but it still detects too much noise. How can I get rid of it?

[My steps]

In the first picture you can see the contours I got with OTSU thresholding. This is the best result so far, but I don't think it is a good approach, because the user cannot influence the behaviour at all: the threshold is computed automatically. On top of that, the contours in this image do not enclose the whole void (the white spot).
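
For reference, this is roughly what the OTSU variant looks like (just a sketch; gray is a placeholder for the preprocessed 8-bit grayscale image and the drawing step is only there for inspection):

    import cv2

    # sketch: OTSU thresholding of the bright voids and extraction of their contours
    # gray is a placeholder for the preprocessed 8-bit grayscale image
    _, otsu_mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, contours, hierarchy = cv2.findContours(otsu_mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

    # draw the detected contours on a colour copy of the image for visual inspection
    preview = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(preview, contours, -1, (0, 0, 255), 1)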

Pictures 2 to 8 are the steps in which I modify the image. I am using Gaussian and median blur; this enhancement may introduce a lot of noise, but even without it the result is basically the same. The last step is the Laplacian edge detector followed by a morphological closing.

Is there a better way to do this?

Here are my input arguments:

    package.voids.contours, package.voids.hierarchy = self.find_voids_inside_component(
        cropped,
        clahe_clip_limit=1,
        clahe_tile_grid_size=(3, 3),
        laplacian_ksize=11,
        closing_ksize=2,
        closing_iterations=2,
        debug_mode=True,
        fxy=1)

And here is the function itself:

def find_voids_inside_component(self,
                                cropped,
                                clahe_clip_limit=2,
                                clahe_tile_grid_size=(3, 3),
                                laplacian_ksize=15,
                                closing_ksize=3,
                                closing_iterations=1,
                                debug_mode=False,
                                fxy=3):
    """
    This function calculates the ratio between the void area and the ball area
    :param cropped: cropped grayscale image of the component to analyse
    :param clahe_clip_limit: clipLimit passed to cv2.createCLAHE
    :param clahe_tile_grid_size: tileGridSize passed to cv2.createCLAHE
    :param laplacian_ksize: aperture size of the Laplacian operator (must be positive and odd)
    :param closing_ksize: kernel size of the morphological closing
    :param closing_iterations: number of iterations of the morphological closing
    :param debug_mode: if True it will print additional info
    :param fxy: scaling factor passed to output.debug_show
    :return: contours, hierarchy
    """
    output.debug_show("Original image", cropped, debug_mode=debug_mode, fxy=fxy, waitkey=False)


    # PARAM: Median blur before enhancing image
    # get rid of salt-and-pepper noise using the median filter
    median_blur = cv2.medianBlur(cropped, ksize=5)

    # debug print
    output.debug_show("Median blur 2", median_blur, debug_mode=debug_mode, fxy=fxy, waitkey=False)

    # apply the smoothing
    # PARAM: The parameters of the Gaussian blur
    # ksize – Gaussian kernel size. ksize.width and ksize.height can differ but they both must be positive and odd.
    #         Or, they can be zero’s and then they are computed from sigma* .
    #
    # sigmaX – Gaussian kernel standard deviation in X direction.
    #
    # sigmaY – Gaussian kernel standard deviation in Y direction; if sigmaY is zero, it is set to be equal to
    #          sigmaX, if both sigmas are zeros, they are computed from ksize.width and ksize.height ,
    #          respectively (see getGaussianKernel() for details); to fully control the result regardless
    #          of possible future
    #         modifications of all this semantics, it is recommended to specify all of ksize, sigmaX, and sigmaY.
    #
    # borderType – pixel extrapolation method (see borderInterpolate() for details).
    blur = cv2.GaussianBlur(cropped, (5, 5), 0)

    # debug print
    output.debug_show("Gauss blur", blur, debug_mode=debug_mode, fxy=fxy, waitkey=False)



    # improve the local contrast using CLAHE
    # create a CLAHE object (Arguments are optional).
    # PARAM: Contrast Limited Adaptive Histogram Equalization
    #  clipLimit – Threshold for contrast limiting.
    #  tileGridSize – Size of grid for histogram equalization. Input image will be divided into equally sized
    #                       rectangular tiles. tileGridSize defines the number of tiles in row and column.

    # good values (clipLimit=2.0, tileGridSize=(3, 3))
    clahe = cv2.createCLAHE(clipLimit=clahe_clip_limit, tileGridSize=clahe_tile_grid_size)
    # it is also possible to use the Gaussian blur here instead of the median blur
    cl1 = clahe.apply(median_blur)

    # debug print
    output.debug_show("Enhanced Image", cl1, debug_mode=debug_mode, fxy=fxy, waitkey=False)

    # debug print -> convert gray scale to colormap
    color_map = cv2.applyColorMap(cl1, cv2.COLORMAP_JET)
    output.debug_show("Color map", color_map, debug_mode=debug_mode, fxy=fxy, waitkey=False)


    # use some edge detector to get the contours of void
    # color_map = cv2.cvtColor(color_map, cv2.COLOR_BGR2GRAY)
    # PARAM: Gaussian blur parameters (same as for the GaussianBlur call above,
    # see the documentation comment there)
    blur = cv2.GaussianBlur(cl1, (5, 5), 0)

    # PARAM: Laplacian edge detector
    # ddepth – Desired depth of the destination image.
    # ksize – Aperture size used to compute the second-derivative filters. See getDerivKernels() for details.
    #        The size must be positive and odd.
    #
    # scale – Optional scale factor for the computed Laplacian values. By default, no scaling is applied.
    #        See getDerivKernels() for details.
    #
    # delta – Optional delta value that is added to the results prior to storing them in dst .
    #
    # borderType – Pixel extrapolation method. See borderInterpolate() for details.
    # ToDo: Try more edge detectors: Sobel, Canny
    # edges = cv2.Canny(blur,threshold1=50, threshold2=100)

    edges = cv2.Laplacian(median_blur, cv2.CV_8U, ksize=laplacian_ksize)
    # abs_edges64f = np.absolute(edges)
    # edges_8u = np.uint8(abs_edges64f)

    # debug print
    output.debug_show("Edges", edges, debug_mode=debug_mode, fxy=fxy, waitkey=False)

    # use closing
    kernel = np.ones((closing_ksize, closing_ksize), np.uint8)
    closing = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel, iterations=closing_iterations)

    # debug print
    output.debug_show("Closing", closing, debug_mode=debug_mode, fxy=fxy, waitkey=True)

    # get contours
    im2, contours, hierarchy = cv2.findContours(closing, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)

    # print("Number of contours: ", len(contours))

    return contours, hierarchy
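
The void/ball ratio mentioned in the docstring is computed afterwards from the returned contours, roughly like this (a sketch only; ball_contour is assumed to come from the earlier component segmentation and is not part of this function):

    import cv2

    # sketch: ratio between the total void area and the ball (component) area
    # ball_contour is a placeholder for the contour of the segmented component
    ball_area = cv2.contourArea(ball_contour)

    void_area = 0.0
    for i, contour in enumerate(contours):
        # with cv2.RETR_CCOMP the outer contours have no parent (hierarchy[0][i][3] == -1)
        if hierarchy[0][i][3] == -1:
            void_area += cv2.contourArea(contour)

    void_ratio = void_area / ball_area if ball_area > 0 else 0.0
    print("Void/ball area ratio: {:.2%}".format(void_ratio))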

I am using Python 3 and OpenCV.

0 Answers:

There are no answers yet.