My task is to write a program that detects and distinguishes three "targets" from altitude for my rocketry club. The targets are three large tarps, and I have their RGB values.
When I started this project, I overlaid a Google Earth image with three rectangles filled with the tarps' exact RGB values, and my code worked flawlessly. However, once I actually received the tarps and started photographing them on the ground, my code could no longer recognize them within the RGB color boundaries I had specified.
I tried converting the image to the HSV color space, but I couldn't get it to work. I also considered using contours, trying to have the program recognize the four straight lines that bound each target. The problem is that the images will be taken outdoors, so I have no control over the ambient lighting conditions.
Does anyone have ideas about which color space or computer vision approach would let me identify and distinguish these targets regardless of the outdoor lighting?
Here is the code:
import cv2
import numpy as np
image = cv2.imread('2000 ft.png', 1)
#hsv_img = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
#cv2.waitKey(0)
cv2.destroyAllWindows()
# define target strings
targ = ['Target 1 - Blue', 'Target 2 - Yellow', 'Target 3 - Red']
i = 0
# BGR boundaries of colors
boundaries = [
# 0, 32, 91
([40, 10, 0], [160, 60, 20]),
# 255, 209, 0
([0, 180, 220], [20, 230, 255]),
# 166, 9, 61
([40, 0, 150], [80, 30, 185]),
]
# colors for rectangle outlines
colors = [
([91, 32, 0]), ([0, 209, 255]), ([61, 9, 166])
]
# loop over the boundaries
for (lower, upper) in boundaries:
    # create NumPy arrays from the boundaries
    lower = np.array(lower, dtype = "uint16")
    upper = np.array(upper, dtype = "uint16")
    # find the colors within the specified boundaries and apply
    # the mask
    mask = cv2.inRange(image, lower, upper)
    output = cv2.bitwise_and(image, image, mask = mask)
    # frame threshold
    frame_threshed = cv2.inRange(image, lower, upper)
    imgray = frame_threshed
    # iteratively view masks
    cv2.imshow('imgray', imgray)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
    ret, thresh = cv2.threshold(frame_threshed, 127, 255, 0)
    contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Find the index of the largest contour
    areas = [cv2.contourArea(c) for c in contours]
    max_index = np.argmax(areas)
    cont = contours[max_index]
    # putting text and outline rectangles on image
    x, y, w, h = cv2.boundingRect(cont)
    cv2.rectangle(image, (x, y), (x + w, y + h), colors[i], 2)
    cv2.putText(image, targ[i], (x - 50, y - 10), cv2.FONT_HERSHEY_PLAIN, 0.85, (0, 255, 0))
    # cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),4)
    cv2.imshow("Show", image)
    cv2.waitKey()
    cv2.destroyAllWindows()
    i += 1
cv2.destroyAllWindows()
I am writing this in Python, which I have a lot of experience with, using the OpenCV library, which I do not have much experience with. Any help would be greatly appreciated!
Answer 0 (score: 0)
The colors coming from the camera (from the rocket, in fact) will depend on the ambient light and are unlikely to exactly match the values you hard-coded here:
# colors for rectangle outlines
colors = [
([91, 32, 0]), ([0, 209, 255]), ([61, 9, 166])
]
You can check this by picking pixel coordinates inside each tarp in one of your photos and printing the values at those points.
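For example, something like this (the file name and coordinates below are placeholders, not values from the question; pick points that actually fall inside each tarp):
import cv2
image = cv2.imread('ground_photo.jpg', 1)  # hypothetical photo of the tarps
# placeholder coordinates -- replace with points inside each tarp
for (x, y) in [(100, 200), (300, 200), (500, 200)]:
    print('BGR at ({}, {}): {}'.format(x, y, image[y, x]))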
If you can still change the markers, I would use higher-contrast markers rather than single solid colors (for example a blue square with a white border), which would help a Canny > findContours pipeline, and the subsequent polygon approximation, find the squares.
If changing the markers is not an option, your best bet is to split the R, G and B channels and run Canny > findContours on each one; see the sketch below. I suspect the red and blue squares will come out fine, but the yellow one will do poorly because it blends into the landscape.
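A rough sketch of that per-channel idea (the file name, blur size, Canny thresholds and minimum area are guesses, not values from the question):
import cv2

image = cv2.imread('ground_photo.jpg', 1)  # hypothetical input photo
candidates = []
for channel in cv2.split(image):  # process B, G and R separately
    blurred = cv2.GaussianBlur(channel, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)  # edge thresholds are guesses
    edges = cv2.dilate(edges, None)      # close small gaps in the edge map
    # findContours returns 2 values on OpenCV 2/4 and 3 values on OpenCV 3
    res = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[0] if len(res) == 2 else res[1]
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        # keep contours that look like reasonably large quadrilaterals
        if len(approx) == 4 and cv2.contourArea(approx) > 500:
            candidates.append(approx)
cv2.drawContours(image, candidates, -1, (0, 255, 0), 2)
cv2.imshow('square candidates', image)
cv2.waitKey(0)
cv2.destroyAllWindows()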
Answer 1 (score: 0)
I have moved the code to Python 3 / OpenCV 3; otherwise it is based on your code.
@pandamakes is right in his claim. You need to look for pixels close to the target value, but you cannot assume you will get pixels that are very close to it. I added a mask for ignoring the border region (you have a lot of artifacts around the edges), and I modified the target values, since in real life you are unlikely to get pixels with zero values (especially in aerial images, where you have atmospheric reflection).
Basically, I look for a region with values close to the target value and use flood fill to locate the actual target boundary.
Edit: moving from the RGB color space to CIELab and using only the a and b color channels adds robustness to lighting conditions.
import cv2
import numpy as np
image = cv2.imread('tarpsB.jpg', 1)
#convert to CIELab
cielab = cv2.cvtColor(image, cv2.COLOR_BGR2Lab)
# define target strings
targ = ['Target 1 - Blue', 'Target 2 - Yellow', 'Target 3 - Red']
i = 0
# BGR colors for the rectangle outlines drawn below
colors = [
    ([91, 40, 40]), ([40, 209, 255]), ([81, 60, 166])
]
# rough conversion of BGR target values to CIELab
cielab_colors = [
([20, 20, -40]), ([80, 0, 90]), ([40, 70, 30])
]
height = image.shape[0]
width = image.shape[1]
# circular mask that keeps the image center and ignores the border artifacts
mask = np.ones(image.shape[0:2])
cv2.circle(mask, (int(width / 2), int(height / 2)), int(height / 2), 0, -1)
mask = 1 - mask
mask = mask.astype('uint8')
# loop over the target colors
for cielab_color in cielab_colors:
    diff_img = cielab.astype(float)
    # per-channel absolute difference from the target color
    # (OpenCV stores 8-bit Lab as L*255/100, a+128, b+128)
    diff_img[:, :, 0] = np.absolute(diff_img[:, :, 0] - 255 * cielab_color[0] / 100)
    diff_img[:, :, 1] = np.absolute(diff_img[:, :, 1] - (cielab_color[1] + 128))
    diff_img[:, :, 2] = np.absolute(diff_img[:, :, 2] - (cielab_color[2] + 128))
    # keep only the a and b channels (ignore lightness)
    diff_img = (diff_img[:, :, 1] + diff_img[:, :, 2]) / 2
    diff_img = cv2.GaussianBlur(diff_img, (19, 19), 0)
    minVal, maxVal, minLoc, maxLoc = cv2.minMaxLoc(diff_img, mask)
    min_img = np.array(diff_img / 255)
    # flood fill from the best-matching pixel to recover the target region
    ff_mask = np.zeros((height + 2, width + 2), np.uint8)
    cv2.floodFill(image, ff_mask, minLoc, 255, (12, 12, 12), (12, 12, 12), cv2.FLOODFILL_MASK_ONLY)
    ff_mask = ff_mask[1:-1, 1:-1]
    im2, contours, hierarchy = cv2.findContours(ff_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    # Find the index of the largest contour
    areas = [cv2.contourArea(c) for c in contours]
    max_index = np.argmax(areas)
    cont = contours[max_index]
    print('target color = {}'.format(image[minLoc[1], minLoc[0], :]))
    # putting text and outline rectangles on image
    x, y, w, h = cv2.boundingRect(cont)
    cv2.rectangle(image, (x, y), (x + w, y + h), colors[i], 2)
    cv2.putText(image, targ[i], (x - 50, y - 10), cv2.FONT_HERSHEY_PLAIN, 0.85, (0, 255, 0))
    cv2.imshow('diff1D', diff_img / 255)
    cv2.imshow('ff_mask', ff_mask * 255)
    cv2.waitKey(0)
    i += 1
cv2.imshow("Show",image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Edit: added output images.