Finding threshold limits in image processing

Date: 2017-07-21 18:38:20

Tags: python algorithm opencv image-processing

As stated in Stack Overflow's help section, one may ask about software algorithms, so I believe this question is on topic. I'm looking at the following algorithm and having a hard time understanding why it works. I've explained its mechanics below. The code comes from the following GitHub repo:

import numpy as np
import cv2
import sys


def calc_sloop_change(histo, mode, tolerance):
    sloop = 0
    for i in range(0, len(histo)):
        if histo[i] > max(1, tolerance):
            sloop = i
            return sloop
        else:
            sloop = i


def process(inpath, outpath, tolerance):
    original_image = cv2.imread(inpath)
    tolerance = int(tolerance) * 0.01

    #Get properties
    width, height, channels = original_image.shape

    color_image = original_image.copy()

    blue_hist = cv2.calcHist([color_image], [0], None, [256], [0, 256])
    green_hist = cv2.calcHist([color_image], [1], None, [256], [0, 256])
    red_hist = cv2.calcHist([color_image], [2], None, [256], [0, 256])

    blue_mode = blue_hist.max()
    blue_tolerance = np.where(blue_hist == blue_mode)[0][0] * tolerance
    green_mode = green_hist.max()
    green_tolerance = np.where(green_hist == green_mode)[0][0] * tolerance
    red_mode = red_hist.max()
    red_tolerance = np.where(red_hist == red_mode)[0][0] * tolerance

    sloop_blue = calc_sloop_change(blue_hist, blue_mode, blue_tolerance)
    sloop_green = calc_sloop_change(green_hist, green_mode, green_tolerance)
    sloop_red = calc_sloop_change(red_hist, red_mode, red_tolerance)

    gray_image = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
    gray_hist = cv2.calcHist([gray_image], [0], None, [256], [0, 256])

    largest_gray = gray_hist.max()
    threshold_gray = np.where(gray_hist == largest_gray)[0][0]

    #Red cells
    gray_image = cv2.adaptiveThreshold(gray_image, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 85, 4)

    _, contours, hierarchy = cv2.findContours(gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    c2 = [i for i in contours if cv2.boundingRect(i)[3] > 15]
    cv2.drawContours(color_image, c2, -1, (0, 0, 255), 1)

    cp = [cv2.approxPolyDP(i, 0.015 * cv2.arcLength(i, True), True) for i in c2]

    countRedCells = len(c2)

    for c in cp:
        xc, yc, wc, hc = cv2.boundingRect(c)
        cv2.rectangle(color_image, (xc, yc), (xc + wc, yc + hc), (0, 255, 0), 1)

    #Malaria cells
    gray_image = cv2.inRange(original_image, np.array([sloop_blue, sloop_green, sloop_red]), np.array([255, 255, 255]))

    _, contours, hierarchy = cv2.findContours(gray_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    c2 = [i for i in contours if cv2.boundingRect(i)[3] > 8]
    cv2.drawContours(color_image, c2, -1, (0, 0, 0), 1)

    cp = [cv2.approxPolyDP(i, 0.15 * cv2.arcLength(i, True), True) for i in c2]

    countMalaria = len(c2)

    for c in cp:
        xc, yc, wc, hc = cv2.boundingRect(c)
        cv2.rectangle(color_image, (xc, yc), (xc + wc, yc + hc), (0, 0, 0), 1)

    #Write image
    cv2.imwrite(outpath, color_image)

    #Write statistics
    with open(outpath + '.stats', mode='w') as f:
        f.write(str(countRedCells) + '\n')
        f.write(str(countMalaria) + '\n')

The code above takes an image of cells (irregular shapes) and determines whether there are black dots/spots inside them. It then draws contours around the cells and the spots. For example: (example image)

What I don't understand is why the algorithm works the way it does:

Let me illustrate with an example. Say the tolerance passed to process() is 50, and blue_hist returns the array [1, 2, 3, 4, 100, 0, ..., 0], whose maximum value, 100, sits at index 4. This means that when only the blue channel is extracted, the image contains 100 pixels of intensity 4. In that case np.where(blue_hist == blue_mode) returns 4. That value is then multiplied by 0.01 * tolerance, giving us 2.
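The arithmetic in this paragraph can be reproduced with plain NumPy (a minimal sketch using the toy histogram from the example above, not real image data):

```python
import numpy as np

# Toy blue-channel histogram: the peak (100 pixels) sits at intensity 4.
blue_hist = np.array([1, 2, 3, 4, 100] + [0] * 251, dtype=np.float32)

tolerance = 50 * 0.01                                 # process() rescales 50 -> 0.5
blue_mode = blue_hist.max()                           # 100.0 -- a pixel COUNT
peak_index = np.where(blue_hist == blue_mode)[0][0]   # 4     -- a pixel INTENSITY
blue_tolerance = peak_index * tolerance               # 4 * 0.5 = 2.0

print(peak_index, blue_tolerance)  # 4 2.0
```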

So if the value 4 is a pixel intensity, multiplying it by a scalar just yields another pixel-intensity-like number (in our case, 4 * (0.01 * 50) = 2). This new pixel intensity is passed to calc_sloop_change(). Inside that function, histo[i], which is the number of pixels with intensity i, is compared against the tolerance (the pixel-intensity value we just computed). So in our example, the first count greater than 2 occurs at histo[2] = 3, and therefore i = 2 is returned.
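Tracing calc_sloop_change() (the function from the code above) on the same toy histogram makes the count-versus-intensity comparison concrete:

```python
def calc_sloop_change(histo, mode, tolerance):
    sloop = 0
    for i in range(0, len(histo)):
        if histo[i] > max(1, tolerance):   # histo[i] is a pixel COUNT
            return i                       # i is a pixel INTENSITY
        sloop = i
    return sloop

histo = [1, 2, 3, 4, 100] + [0] * 251

# histo[0]=1 and histo[1]=2 are not > 2; histo[2]=3 is the first count > 2.
print(calc_sloop_change(histo, 100, 2.0))  # 2
```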

This is where I'm confused. Why do it this way? Comparing a pixel count against a pixel intensity seems illogical; they aren't even the same kind of quantity. So why does the algorithm use this comparison? I should add that the code actually performs very well, so it must be doing something right.

Finally, the three values computed by calc_sloop_change() (one per color channel) act as lower cutoffs to produce a binary image: anything below those values (which are effectively pixel intensities) becomes black, and everything above them becomes white.

0 answers