Multithreaded image processing with OpenCV in Python

Time: 2019-12-29 15:03:18

Tags: python python-3.x numpy opencv python-multithreading

I'm fairly new to Python and I'm having trouble parallelizing part of my algorithm. Consider an input image that needs to be thresholded in a certain way at the pixel level. Since the algorithm only considers a specific region to calculate the threshold, I want to run each chunk of the image in a separate thread/process. This is where I'm stuck: I can't find a way for these threads to work on the same image, nor how to merge the results into a new image. Coming from the Java world, I usually try to avoid interfering with other threads, so I simply tried passing the image to each process.

import concurrent.futures
from concurrent.futures import ProcessPoolExecutor

def thresholding(img):
    stepSize = int(img.shape[0] / 10)
    futures = []
    with ProcessPoolExecutor(max_workers=4) as e:
        for y in range(0, img.shape[0], stepSize):
            for x in range(0, img.shape[1], stepSize):
                futures.append(e.submit(thresholdThread, y, x, img))
    concurrent.futures.wait(futures)
    return img


def thresholdThread(y, x, img):
    window_size = int(img.shape[0] / 10)
    window_shape = (window_size, window_size)
    window = img[y:y + window_shape[1], x:x + window_shape[0]]
    upper_bound, lower_bound, avg = getThresholdBounds(window, 0.6)

    for y_2 in range(0, window.shape[0]):
        for x_2 in range(0, window.shape[1]):
            tmp = img[y + y_2, x + x_2]
            img[y + y_2, x + x_2] = tmp if (tmp >= upper_bound or tmp <= lower_bound) else avg
    return str(avg)

As far as I understand, this doesn't work because each process gets its own copy of img. But since img is a numpy ndarray of floats, I don't know if and how I can use a shared object (here).
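
To illustrate what I mean by a shared object, something along these lines is what I imagine (a rough sketch only, using multiprocessing.RawArray wrapped with np.frombuffer; I have not gotten this working, and the names below are just placeholders):

import numpy as np
from multiprocessing import RawArray

# Hypothetical sketch: back the image with shared memory so child processes
# write into the same buffer instead of receiving copies (relies on a
# fork-based start method so the module-level globals are inherited).
shape = (1024, 1024)
shared_buf = RawArray('d', shape[0] * shape[1])   # 'd' -> C double / float64
shared_img = np.frombuffer(shared_buf, dtype=np.float64).reshape(shape)

def shared_threshold_worker(y, x, step):
    window = shared_img[y:y + step, x:x + step]
    window[:] = window.mean()   # placeholder for the real thresholding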

FYI: I'm using Python 3.6.9. I do know 3.7 has been released, but reinstalling everything so that Spyder and OpenCV work again is not that easy.

1 Answer:

Answer 0: (score: 3)

You aren't taking advantage of any of Numpy's vectorization techniques, which can significantly reduce processing time. I'm assuming that's why you want to multiprocess over windows/chunks of the image - I don't know what Docker is, so I don't know whether it is a factor in your multiprocessing approach.

Here is a vectorized solution, with the caveat that the bottom and right edge pixels may be excluded from the operation. If that is not acceptable, there is no need to read further.

The right and bottom edge windows in your example will most likely be a different size than the others. It looks like you picked 10 arbitrarily to chunk up the image - if 10 was indeed arbitrary, you can easily minimize the leftover pixels at the bottom and right edges - I'll post that function at the end of the answer.

The image needs to be reshaped into patches to vectorize the operation. I used the sklearn function sklearn.feature_extraction.image._extract_patches because it is convenient and allows non-overlapping patches to be created (which appears to be what you want). Note the underscore prefix - this used to be the exposed function image.extract_patches, but that has been deprecated. The function uses numpy.lib.stride_tricks.as_strided - it might be possible to just reshape the array, but I haven't tried that.

Setup

import numpy as np
from sklearn.feature_extraction import image
img = np.arange(4864*3546*3).reshape(4864,3546,3)
# all shape dimensions in the following example derived from img's shape

Define the patch size (see opt_size below) and reshape the image.

hsize, h_remainder, h_windows = opt_size(img.shape[0])
wsize, w_remainder, w_windows = opt_size(img.shape[1])

# rgb - not designed for rgba
if img.ndim == 3:
    patch_shape = (hsize,wsize,img.shape[-1])
else:
    patch_shape = (hsize,wsize)

patches = image._extract_patches(img,patch_shape=patch_shape,
                                 extraction_step=patch_shape)
patches = patches.squeeze()

patches is a view of the original array, and changes made to it will be seen in the original. Its shape is (8, 9, 608, 394, 3): there are 8x9 windows/patches of shape (608, 394, 3).
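
To make the "just reshape" idea mentioned above concrete, here is a sketch of my own (not from the original answer and untested there): with the patch sizes from the setup, a reshape/transpose produces the same 8x9 blocking, but it only stays a writable view when no copy is forced, which is why _extract_patches/as_strided is the safer route.

>>> np.shares_memory(img, patches)     # confirms patches is a view of img
True
>>> h_windows, w_windows = img.shape[0] // hsize, img.shape[1] // wsize
>>> patches_alt = (img.reshape(h_windows, hsize, w_windows, wsize, img.shape[-1])
...                   .transpose(0, 2, 1, 3, 4))
>>> patches_alt.shape
(8, 9, 608, 394, 3)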

Find each patch's upper and lower bounds; compare each pixel against its patch's bounds; extract the indices of each pixel that lies between the bounds and needs to be changed.

lower = patches.min((2,3)) * .6
lower = lower[...,None,None,:]
upper = patches.max((2,3)) * .6
upper = upper[...,None,None,:]
indices = np.logical_and(patches > lower, patches < upper).nonzero()

Find each patch's mean, then change the required pixel values:

avg = patches.mean((2,3))    # shape (8,9,3)
patches[indices] = avg[indices[0],indices[1],indices[-1]]
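
Since patches has five dimensions, indices is a tuple of five index arrays; taking its first, second, and last entries looks up the (patch row, patch column, channel) mean for every flagged pixel. A toy check of that style of indexing (my own illustration; avg itself is shaped (8, 9, 3) as noted above):

>>> means = np.arange(8*9*3).reshape(8, 9, 3)               # stand-in for avg
>>> rows, cols, chans = np.array([0, 7]), np.array([0, 8]), np.array([2, 1])
>>> means[rows, cols, chans]                                # one mean per flagged pixel
array([  2, 214])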

A function that pulls it all together:

def g(img, opt_shape=False):
    original_shape = img.shape

    # determine patch shape   
    if opt_shape:
        hsize, h_remainder, h_windows = opt_size(img.shape[0])
        wsize, w_remainder, w_windows = opt_size(img.shape[1])
    else:
        patch_size = img.shape[0] // 10
        hsize, wsize = patch_size,patch_size

    # constraint checking here(?) for
    #     number of windows,
    #     orphaned pixels

    if img.ndim == 3:
        patch_shape = (hsize,wsize,img.shape[-1])
    else:
        patch_shape = (hsize,wsize)

    patches = image._extract_patches(img,patch_shape=patch_shape,
                                     extraction_step=patch_shape)
    #squeeze??
    patches = patches.squeeze()

    #assume color (h,w,3)
    lower = patches.min((2,3)) * .6
    lower = lower[...,None,None,:]
    upper = patches.max((2,3)) * .6
    upper = upper[...,None,None,:]
    indices = np.logical_and(patches > lower, patches < upper).nonzero()

    avg = patches.mean((2,3))
##    del lower, upper, mask
    patches[indices] = avg[indices[0],indices[1],indices[-1]]

def opt_size(size):
    '''Maximize number of windows, minimize loss at the edge

    size -> int
       Number of "windows" constrained to 4-10
       Returns (int,int,int)
           size in pixels,
           loss in pixels,
           number of windows
    '''

    size = [(divmod(size,n),n) for n in range(4,11)]
    n_windows = 0
    remainder = 99
    patch_size = 0
    for ((p,r),n) in size:
        if r <= remainder and n > n_windows:
            remainder = r
            n_windows = n
            patch_size = p
    return patch_size, remainder, n_windows
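
For example, with the 4864x3546 image from the setup, this yields the 8x9 grid of 608x394 patches seen above (outputs added here for illustration):

>>> opt_size(4864)
(608, 0, 8)
>>> opt_size(3546)
(394, 0, 9)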

Tested against your naive process - I hope I executed that correctly. This gives roughly a 35x improvement for a 4864x3546 color image. There is probably room for further optimization - maybe some wizards will comment.

Testing with a chunk factor of 10:

#yours
def f(img):
    window_size = int(img.shape[0] / 10)
    window_shape = (window_size, window_size)

    for y in range(0, img.shape[0], window_size):
        for x in range(0, img.shape[1], window_size):

            window = img[y:y + window_shape[1], x:x + window_shape[0]]
            upper_bound = window.max((0,1)) * .6
            lower_bound = window.min((0,1)) * .6
            avg = window.mean((0,1))

            for y_2 in range(0, window.shape[0]):
                for x_2 in range(0, window.shape[1]):
                    tmp = img[y + y_2, x + x_2]
                    indices = np.logical_and(tmp < upper_bound,tmp > lower_bound)
                    tmp[indices] = avg[indices]


img0 = np.arange(4864*3546*3).reshape(4864,3546,3)
#get everything the same shape
size = img0.shape[0] // 10
h,w = size*10, size * (img0.shape[1]//size)
img1 = img0[:h,:w].copy()
img2 = img1.copy()

assert np.all(np.logical_and(img1==img2,img2==img0[:h,:w]))
f(img1)    # ~44 seconds
g(img2)    # ~1.2 seconds
assert(np.all(img1==img2))
if np.all(img2 == img0[:h, :w]):
    raise Exception('did not change')

indices is an index array. It is a tuple of arrays, one for each dimension. indices[0][0], indices[1][0], indices[2][0] would be the index of a single element in a 3d array. The full tuple can be used to index multiple elements of the array.

>>> indices
(array([1, 0, 2]), array([1, 0, 0]), array([1, 1, 1]))
>>> list(zip(*indices))
[(1, 1, 1), (0, 0, 1), (2, 0, 1)]
>>> arr = np.arange(27).reshape(3,3,3)
>>> arr[1,1,1], arr[0,0,1], arr[2,0,1]
(13, 1, 19)
>>> arr[indices]
array([13,  1, 19])

# arr[indices] <-> np.array([arr[1,1,1],arr[0,0,1],arr[2,0,1]])

np.logical_and(patches > lower, patches < upper) returns a boolean array, and nonzero() returns the indices of all the elements that are True.
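
A tiny illustration of that (added here for clarity):

>>> mask = np.array([[ True, False],
...                  [False,  True]])
>>> mask.nonzero()
(array([0, 1]), array([0, 1]))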