Trim a scanned image using PIL?

Asked: 2011-09-23 12:18:08

Tags: python image-processing python-imaging-library

What is a good way to trim an image that was input from a scanner and therefore has large white/black areas around it?

2 answers:

Answer 0 (score: 4)

The entropy solution seems problematic and too computationally intensive. Why not edge detection?

I just wrote this Python code to solve the same problem for myself. My background was a dirty white, so the criterion I used was darkness and color. I simplified that criterion by taking the minimum of the R, G or B values for each pixel, so that black or saturated red both stand out equally. I also used the average of several of the darkest pixels in each row or column. Then I started at each edge and worked inward until I crossed a threshold.

Here is my code:

#these values set how sensitive the bounding box detection is
threshold = 200     #the average of the darkest values must be _below_ this to count (0 is darkest, 255 is lightest)
obviousness = 50    #how many of the darkest pixels to include (1 would mean a single dark pixel triggers it)

from PIL import Image
import numpy as np

def find_line(vals):
    #implement edge detection once, use many times 
    for i,tmp in enumerate(vals):
        tmp.sort()
        average = float(sum(tmp[:obviousness]))/len(tmp[:obviousness])
        if average <= threshold:
            return i
    return i    #no row/column crossed the threshold; fall back to the full extent

def getbox(img):
    #get the bounding box of the interesting part of a PIL image object
    #this is done by getting the darkest of the R, G or B value of each pixel
    #and finding where the edge gets dark/colored enough
    #returns a tuple of (left,upper,right,lower)

    width, height = img.size    #for making a 2d array
    retval = [0,0,width,height] #values will be disposed of, but this is a black image's box 

    pixels = list(img.getdata())
    vals = []                   #store the value of the darkest color
    for pixel in pixels:
        vals.append(min(pixel)) #the darkest of the R,G or B values

    #make 2d array
    vals = np.array([vals[i * width:(i + 1) * width] for i in range(height)])

    #start with upper bounds
    forupper = vals.copy()
    retval[1] = find_line(forupper)

    #next, do lower bounds
    forlower = vals.copy()
    forlower = np.flipud(forlower)
    retval[3] = height - find_line(forlower)

    #left edge, same as before but rotate the data so left edge is top edge
    forleft = vals.copy()
    forleft = np.swapaxes(forleft,0,1)
    retval[0] = find_line(forleft)

    #and right edge is bottom edge of rotated array
    forright = vals.copy()
    forright = np.swapaxes(forright,0,1)
    forright = np.flipud(forright)
    retval[2] = width - find_line(forright)

    if retval[0] >= retval[2] or retval[1] >= retval[3]:
        print "error, bounding box is not legit"
        return None
    return tuple(retval)

if __name__ == '__main__':
    image = Image.open('cat.jpg')
    box = getbox(image)
    print "result is: ",box
    result = image.crop(box)
    result.show()

Answer 1 (score: 2)

First, Here is a similar question, Here is a related question, and another related question.

This is just one idea, and there are certainly other approaches. I would pick an arbitrary crop line, measure the entropy* on either side of it, and keep re-picking the crop line (perhaps with something like bisection) until the entropy of the part being cropped away falls below a defined threshold. I think you may need a brute-force root-finding approach, because you won't have a good indication of when you have cropped too little. Then repeat for the remaining 3 edges (a rough sketch follows below).

*I recall finding that the entropy method from the referenced sites wasn't entirely accurate, but I can't find my notes (I'm sure it's in an SO post, though).

Edit: other criteria for the "emptiness" of a portion of the image (besides entropy) could be its contrast, or the contrast of an edge-detection result.
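
As a rough sketch of the entropy idea above (assuming Pillow; the image_entropy helper, the step size, and the threshold value are illustrative guesses, and it uses brute-force stepping inward rather than bisection):

import math
from PIL import Image

def image_entropy(img):
    #Shannon entropy of a PIL image, computed from its grayscale histogram
    histogram = img.convert('L').histogram()
    total = float(sum(histogram))
    entropy = 0.0
    for count in histogram:
        if count:
            p = count / total
            entropy -= p * math.log(p, 2)
    return entropy

def find_left_crop(img, threshold=0.5, step=10):
    #move a vertical crop line inward from the left edge for as long as the
    #strip being cropped away stays below the entropy threshold (i.e. is "empty")
    width, height = img.size
    left = 0
    while left + step < width:
        strip = img.crop((0, 0, left + step, height))
        if image_entropy(strip) > threshold:
            break   #the strip now contains real content, stop here
        left += step
    return left

if __name__ == '__main__':
    image = Image.open('scan.jpg')
    print("left crop line at x =", find_left_crop(image))

The same loop could be reused for the other three edges by flipping or transposing the image first, in the same spirit as the answer above.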