I'm trying to detect skin. I found a nice, simple formula for detecting skin in an RGB picture. The only problem is that the for loop is very slow, and I need to speed the process up. I've done some research and found that vectorization can fix the slow for loop, but I don't know how to apply it in my case.
Here is my function's code:
The function takes one argument, a numpy array of shape (144, 256, 3) with dtype=np.uint8.
The function returns the coordinates of the first detected skin-colored pixel (as numpy.array [height, width]), the number of skin-colored pixels detected in the picture (an integer), and the calculated angle (from left to right, a float).
import math
import numpy as np

# picture = numpy array with shape 144x256x3, dtype=np.uint8
def filter_image(picture):
    r = 0.0
    g = 0.0
    b = 0.0
    # first_point stores the first occurrence of a skin-colored pixel, so I can track the person's movement
    first_point = np.array([-1, -1])
    # counter counts how many skin-colored pixels are in the image (to estimate the distance to the target, because the LIDAR isn't working)
    counter = 0
    # angle of the first skin-colored pixel (from left to right, calculated from the horizontal FOV)
    angle = 0.0
    H = picture.shape[0]
    W = picture.shape[1]
    # loop through each pixel
    for i in range(H):
        for j in range(W):
            # if all RGB channels are 0 (black), skip to the next pixel
            if int(picture[i, j][0] + picture[i, j][1] + picture[i, j][2]) == 0:
                continue
            # otherwise calculate the normalized r, g, b used for skin recognition
            else:
                r = picture[i, j][0] / int(picture[i, j][0] + picture[i, j][1] + picture[i, j][2])
                g = picture[i, j][1] / int(picture[i, j][0] + picture[i, j][1] + picture[i, j][2])
                b = picture[i, j][2] / int(picture[i, j][0] + picture[i, j][1] + picture[i, j][2])
            # if any of r, g, b is 0, skip to the next pixel
            if g == 0 or r == 0 or b == 0:
                continue
            # if True, the pixel is skin colored
            elif r / g > 1.185 and ((r * b) / math.pow(r + b + g, 2)) > 0.107 and ((r * g) / math.pow(r + b + g, 2)) > 0.112:
                # if this is the first skin-colored point in the whole image, save its i, j coordinates
                if first_point[0] == -1:
                    # save the first skin-color occurrence
                    first_point[0] = i
                    first_point[1] = j
                    # calculate the angle from the pixel's width coordinate, the camera's horizontal FOV, and a constant
                    angle = (j + 1) * 91 * 0.00390626
                # whenever we detect a skin-colored pixel, increment the counter
                counter += 1
                continue
    # the function returns the coordinates of the first skin-colored pixel, the count of skin-colored pixels, and the calculated angle (from left to right, based on the j coordinate of the first skin-colored pixel)
    return first_point, counter, angle
The function works fine; the only problem is speed!
Thanks for your help!
Answer 0 (score: 2)
When trying to improve code performance, one of the first things always worth trying is to see whether something like numba can make it faster basically for free.
Here's an example of how you could use it with your code:
import math
import time

# I'm just importing numpy here so I can make a random input of the
# same dimensions that you mention in your question.
import numpy as np
from numba import jit


@jit(nopython=True)
def filter_image(picture):
    ... I just copied the body of this function from your post above ...
    return first_point, counter, angle


def main():
    n_iterations = 10
    img = np.random.rand(144, 256, 3)

    before = time.time()
    for _ in range(n_iterations):
        # In Python 3, this was just a way I could get access to the original
        # function you defined, without having to make a separate function for
        # it (as the numba call replaces it with an optimized version).
        # It's equivalent to just calling your original function here.
        filter_image.__wrapped__(img)
    print(f'took: {time.time() - before:.3f} without numba')

    before = time.time()
    for _ in range(n_iterations):
        filter_image(img)
    print(f'took: {time.time() - before:.3f} WITH numba')


if __name__ == '__main__':
    main()
Output showing the time difference:
took: 1.768 without numba
took: 0.414 WITH numba
...You could probably do even better by actually optimizing this function, but if this speedup is enough that no further optimization is needed, it may be all you need!
EDIT (based on a comment): the timings I report above also include numba's up-front cost of just-in-time compiling the function, which happens on the first call. If you make many calls to this function, the real performance difference will likely be even larger. Timing all calls except the first should give a more accurate per-call comparison.
Answer 1 (score: 2)
You can skip all the loops and do this with numpy's broadcasting. The process becomes much easier if you reshape the image from 3D to 2D, letting you work with H×W rows of pixels.
import numpy as np

def filter(picture):
    H, W = picture.shape[0], picture.shape[1]
    picture = picture.astype('float').reshape(-1, 3)

    # A pixel with any of r, g, b equal to zero can be removed.
    picture[np.prod(picture, axis=1) == 0] = 0

    # Divide non-zero pixels by their rgb sum
    picsum = picture.sum(axis=1)
    nz_idx = picsum != 0
    picture[nz_idx] /= picsum[nz_idx].reshape(-1, 1)
    nonzeros = picture[nz_idx]

    # Condition 1: r/g > 1.185
    C1 = (nonzeros[:, 0] / nonzeros[:, 1]) > 1.185
    # Condition 2: r*b / (r+g+b)^2 > 0.107
    C2 = (nonzeros[:, 0] * nonzeros[:, 2]) / (nonzeros.sum(axis=1)**2) > 0.107
    # Condition 3: r*g / (r+g+b)^2 > 0.112
    C3 = (nonzeros[:, 0] * nonzeros[:, 1]) / (nonzeros.sum(axis=1)**2) > 0.112

    # Combine conditions
    C = (C1 * C2 * C3) != 0

    picsum[nz_idx] = C
    skin_points = np.where(picsum != 0)[0]

    first_point = np.unravel_index(skin_points[0], (H, W))
    counter = len(skin_points)
    angle = (first_point[1] + 1) * 91 * 0.00390626

    return first_point, counter, angle
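To sanity-check the reshape-and-mask idea, here is a self-contained sketch of the same steps on a tiny hand-made "image" (the pixel values in `tiny` are made up purely for illustration; the thresholds are the ones from the question):

```python
import numpy as np

# Tiny 2x3 "image": one skin-like pixel (high red), the rest black or failing the thresholds.
tiny = np.array([[[180, 100,  80], [0, 0, 0], [0, 0, 255]],
                 [[  0,   0,   0], [0, 0, 0], [10, 10, 10]]], dtype=np.uint8)

H, W = tiny.shape[:2]
flat = tiny.astype('float').reshape(-1, 3)        # (H*W, 3): one r,g,b row per pixel

# Zero out pixels where any channel is 0, then normalize the rest by their channel sum.
flat[np.prod(flat, axis=1) == 0] = 0
s = flat.sum(axis=1)
nz = s != 0
flat[nz] /= s[nz].reshape(-1, 1)

# After normalization r+g+b == 1 for every kept pixel, so (r+g+b)^2 == 1 and can be dropped.
r, g, b = flat[nz, 0], flat[nz, 1], flat[nz, 2]
mask = (r / g > 1.185) & (r * b > 0.107) & (r * g > 0.112)

# Scatter the boolean result back to flat pixel indices, then recover 2D coordinates.
s[nz] = mask
skin = np.where(s != 0)[0]
first = tuple(int(v) for v in np.unravel_index(skin[0], (H, W)))
print(len(skin), first)  # → 1 (0, 0)
```

The (180, 100, 80) pixel normalizes to roughly (0.50, 0.28, 0.22), which passes all three conditions, while the (10, 10, 10) pixel fails r/g > 1.185; the zero-channel pixels are removed before normalization, mirroring the early `continue` branches of the original loop.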