I am trying to remove the background from some images. Tweaking a few values and using methods like morphologyEx gives me an acceptable result, but some holes remain; in the last case they don't even get filled by iterating over every contour and drawing it with -1. I can see that the thresholded image is really good, with the whole shape outlined, but I don't know how to proceed from there...
UPDATE: I have changed my code so that I get better results, but I still have some holes... If I could fill those holes the script would be perfect.
import cv2
import numpy as np

def get_contrasted(image, type="dark", level=3):
    maxIntensity = 255.0  # depends on dtype of image data
    phi = 1
    theta = 1
    if type == "light":
        newImage0 = (maxIntensity/phi)*(image/(maxIntensity/theta))**0.5
        newImage0 = np.array(newImage0, dtype=np.uint8)
        return newImage0
    elif type == "dark":
        newImage1 = (maxIntensity/phi)*(image/(maxIntensity/theta))**level
        newImage1 = np.array(newImage1, dtype=np.uint8)
        return newImage1

def sharp(image, level=3):
    f = cv2.GaussianBlur(image, (level, level), level)
    f = cv2.addWeighted(image, 1.5, f, -0.5, 0)
    return f

original_image = cv2.imread('imagen.jpg')

# 1 Convert to gray & Normalize
gray_img = cv2.cvtColor(original_image, cv2.COLOR_BGR2GRAY)
gray_img = sharp(get_contrasted(gray_img))
gray_img = cv2.normalize(gray_img, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8UC1)
cv2.imshow("Gray", gray_img)

# 2 Find Threshold
gray_blur = cv2.GaussianBlur(gray_img, (7, 7), 0)
adapt_thresh_im = cv2.adaptiveThreshold(gray_blur, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 1)
max_thresh, thresh_im = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
thresh = cv2.bitwise_or(adapt_thresh_im, thresh_im)

# 3 Dilate
gray = cv2.Canny(thresh, 88, 400, apertureSize=3)
gray = cv2.dilate(gray, None, iterations=8)
gray = cv2.erode(gray, None, iterations=8)
cv2.imshow("Threshold", gray)

# 4 Flood
contours, _ = cv2.findContours(gray, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour_info = []
for c in contours:
    contour_info.append((
        c,
        cv2.isContourConvex(c),
        cv2.contourArea(c),
    ))
contour_info = sorted(contour_info, key=lambda c: c[2], reverse=True)
max_contour = contour_info[0]
holes = np.zeros(gray_img.shape, np.uint8)
cv2.drawContours(holes, [max_contour[0]], 0, 255, -1)
cv2.imshow("Holes", holes)

mask = cv2.GaussianBlur(holes, (15, 15), 0)
mask = np.dstack([mask] * 3)                     # Create 3-channel alpha mask
mask = mask.astype('float32') / 255.0            # Use float matrices,
img = original_image.astype('float32') / 255.0   # for easy blending
masked = (mask * img) + ((1 - mask) * (0, 0, 1)) # Blend
masked = (masked * 255).astype('uint8')
cv2.imshow("Masked", masked)
cv2.waitKey()
Answer 0 (score: 8)
Perform morphological closing of the holes image iteratively, with a kernel of increasing size. Before doing that, however, I suggest you resize the holes image (using nearest-neighbor interpolation) so that you don't have to use huge kernels. In the code below (C++) I resized the holes image to 25% of its original dimensions.
To reduce the effect on the borders, add a constant zero border using copyMakeBorder before applying the iterative closing. Since 15 iterations are used here, make the border around the image larger than 15.
So the steps are:
- Resize the holes image down (nearest-neighbor interpolation)
- Add a constant zero border with copyMakeBorder
- Apply morphological closing iteratively, with a kernel of increasing size
- Remove the added border
- Resize the resulting mask back to the original image size
The code is in C++; I'm not very familiar with Python.
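A minimal Python sketch of these steps (the helper name close_holes, the 25% scale, the 15 closing iterations and the 16-pixel border are assumptions taken from the description above, not the author's original C++ code):

import cv2

def close_holes(holes, scale=0.25, iterations=15, border=16):
    # Shrink the holes mask so the closing kernels can stay small (nearest neighbour keeps it binary)
    small = cv2.resize(holes, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)
    # Pad with zeros so the closing does not bleed over the image border (border > number of iterations)
    padded = cv2.copyMakeBorder(small, border, border, border, border, cv2.BORDER_CONSTANT, value=0)
    # Close iteratively with a growing elliptical kernel
    for i in range(1, iterations + 1):
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * i + 1, 2 * i + 1))
        padded = cv2.morphologyEx(padded, cv2.MORPH_CLOSE, kernel)
    # Strip the padding and scale the mask back up to the original size
    closed = padded[border:-border, border:-border]
    return cv2.resize(closed, (holes.shape[1], holes.shape[0]), interpolation=cv2.INTER_NEAREST)

Answer 2 below follows the same recipe with a complete remove_background() helper.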
The output (not shown at the original scale) looks good, except that it picks up the background region at the bottom as foreground.
Answer 1 (score: 6)
I ran into the same problem and found a solution in Python (using opencv2), so I thought I'd share it here too. Hope it helps.
import numpy as np
import cv2
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
#Load the Image
imgo = cv2.imread('koAl2.jpg')
height, width = imgo.shape[:2]
#Create a mask holder
mask = np.zeros(imgo.shape[:2],np.uint8)
#Grab Cut the object
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
#Hard Coding the Rect The object must lie within this rect.
rect = (10,10,width-30,height-30)
cv2.grabCut(imgo,mask,rect,bgdModel,fgdModel,5,cv2.GC_INIT_WITH_RECT)
mask = np.where((mask==2)|(mask==0),0,1).astype('uint8')
img1 = imgo*mask[:,:,np.newaxis]
#Get the background
background = imgo - img1
#Change all pixels in the background that are not black to white
background[np.where((background > [0,0,0]).all(axis = 2))] = [255,255,255]
#Add the background and the image
final = background + img1
#To be done - Smoothening the edges
cv2.imshow('image', final)
k = cv2.waitKey(0)
if k == 27:
    cv2.destroyAllWindows()
Answer 2 (score: 4)
@dhanushka's approach works fine. Here is my pythonic version:
import cv2 as cv

def get_holes(image, thresh):
    gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
    im_bw = cv.threshold(gray, thresh, 255, cv.THRESH_BINARY)[1]
    im_bw_inv = cv.bitwise_not(im_bw)

    contour, _ = cv.findContours(im_bw_inv, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
    for cnt in contour:
        cv.drawContours(im_bw_inv, [cnt], 0, 255, -1)

    nt = cv.bitwise_not(im_bw)
    im_bw_inv = cv.bitwise_or(im_bw_inv, nt)
    return im_bw_inv

def remove_background(image, thresh, scale_factor=.25, kernel_range=range(1, 15), border=None):
    border = border or kernel_range[-1]

    holes = get_holes(image, thresh)
    small = cv.resize(holes, None, fx=scale_factor, fy=scale_factor)
    bordered = cv.copyMakeBorder(small, border, border, border, border, cv.BORDER_CONSTANT)

    for i in kernel_range:
        kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (2*i+1, 2*i+1))
        bordered = cv.morphologyEx(bordered, cv.MORPH_CLOSE, kernel)

    unbordered = bordered[border: -border, border: -border]
    mask = cv.resize(unbordered, (image.shape[1], image.shape[0]))
    fg = cv.bitwise_and(image, image, mask=mask)
    return fg

img = cv.imread('koAl2.jpg')
nb_img = remove_background(img, 230)
Answer 3 (score: 2)
@grep, regarding Alexander Lutsenko's post: on Python 3.6.3, to get the code to work you need to accept another return value from findContours() (OpenCV 3.x returns three values instead of two), changing
contour, _ = cv.findContours(im_bw_inv, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
to
_, contour, _ = cv.findContours(im_bw_inv, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
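As an aside (not part of the original answer), a variant that works across OpenCV versions is to take the second-to-last returned value, since OpenCV 2.x and 4.x return (contours, hierarchy) while 3.x returns (image, contours, hierarchy):

contour = cv.findContours(im_bw_inv, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)[-2]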
Answer 4 (score: 0)
Try these morphological operations, in C++, for dilating the shape and removing the holes:
Mat erodeElement = getStructuringElement(MORPH_RECT, Size(4, 4));
morphologyEx(thresh, thresh, MORPH_CLOSE, erodeElement);
morphologyEx(thresh, thresh, MORPH_OPEN, erodeElement);
morphologyEx(thresh, thresh, MORPH_CLOSE, erodeElement);
morphologyEx(thresh, thresh, MORPH_OPEN, erodeElement);
morphologyEx(thresh, thresh, MORPH_OPEN, erodeElement);
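For anyone following along with the question's Python code, a rough Python equivalent of the same close/open sequence (the 4x4 rectangular kernel is taken from the C++ snippet; clean_mask is a hypothetical helper name) would be:

import cv2

def clean_mask(thresh):
    # Same sequence as the C++ snippet: close, open, close, open, open with a 4x4 rectangular kernel
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (4, 4))
    for op in (cv2.MORPH_CLOSE, cv2.MORPH_OPEN, cv2.MORPH_CLOSE, cv2.MORPH_OPEN, cv2.MORPH_OPEN):
        thresh = cv2.morphologyEx(thresh, op, kernel)
    return thresh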