Removing the grid from scanned/photographed medical documents

Date: 2019-07-09 12:50:33

Tags: python opencv image-segmentation

I am a dental student and I am currently trying to write a script that analyses dental records and extracts the handwritten digits from them. A rough version of the script is finished, but its recognition rate is very low. One big problem when analysing the data is that the printed grid is hard to remove.

The scanned form I want to analyse (the white fields are blanked out for anonymization):

The empty template form:

I have already tried different approaches to this problem (erosion/dilation, HoughLineTransform, and removing the detected lines). So far the best results come from feature matching and subtracting the empty template.
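
For reference, a HoughLinesP-based line removal roughly like the one I tried could look like the sketch below; the threshold, minLineLength, and line thickness values are only placeholders, not the ones from my actual attempt:

import cv2
import numpy as np

def remove_grid_with_hough(gray):
    # binarize so the printed grid lines become white on black
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # detect long straight segments; handwriting strokes should stay below minLineLength
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=100,
                            minLineLength=80, maxLineGap=5)

    cleaned = binary.copy()
    if lines is not None:
        for line in lines:
            x1, y1, x2, y2 = line[0]
            # paint each detected line segment black to erase it
            cv2.line(cleaned, (x1, y1), (x2, y2), 0, thickness=3)

    # return dark ink on a white background again
    return cv2.bitwise_not(cleaned)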

Result:

Eroding and dilating this image gives better results.

Result:

However, almost every attempt needs to be recalibrated. Do you know a more elegant solution to my problem? Could SURF matching achieve better results? Thank you very much!
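
For comparison, the detector swap I am asking about could be sketched as follows; this assumes an OpenCV build where cv2.SIFT_create is available (SURF itself only ships with the non-free contrib modules), and it is not code I have tested:

import cv2

def detect_and_match_sift(img_preprocessed, template_img, good_match_percent=0.15):
    # SIFT produces float descriptors, so the matcher norm changes from Hamming to L2
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_preprocessed, None)
    kp2, des2 = sift.detectAndCompute(template_img, None)

    # crossCheck keeps only mutual best matches
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # keep only the best fraction of matches, as in the ORB version
    return kp1, kp2, matches[:int(len(matches) * good_match_percent)]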

Here is my code so far:

import cv2
import numpy as np

GOOD_MATCH_PERCENT = 0.15

def match_img_to_template(input_img, template_img, MAX_FEATURES, GOOD_MATCH_PERCENT):

    # blur the template image to reduce scanner noise
    template_img = cv2.GaussianBlur(template_img, (3, 3), cv2.BORDER_DEFAULT)

    # equalize the histogram of the input image
    img_preprocessed = cv2.equalizeHist(input_img)

    # ORB Detector
    orb = cv2.ORB_create(MAX_FEATURES)
    kp1, des1 = orb.detectAndCompute(img_preprocessed, None)
    kp2, des2 = orb.detectAndCompute(template_img, None)

    # Brute-force matching with Hamming distance (ORB descriptors are binary)
    matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
    matches = matcher.match(des1, des2, None)

    # sort by distance, best matches first (sorted() also works when the
    # Python bindings return a tuple instead of a list)
    matches = sorted(matches, key=lambda x: x.distance)

    # Remove not so good matches (only once; filtering twice would keep
    # GOOD_MATCH_PERCENT squared of the matches)
    numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
    matches = matches[:numGoodMatches]

    # Extract location of good matches
    points1 = np.zeros((len(matches), 2), dtype=np.float32)
    points2 = np.zeros((len(matches), 2), dtype=np.float32)

    for i, match in enumerate(matches):
        points1[i, :] = kp1[match.queryIdx].pt
        points2[i, :] = kp2[match.trainIdx].pt

    # Find homography
    h, mask = cv2.findHomography(points1, points2, cv2.RANSAC)

    # Use homography
    height, width = template_img.shape
    input_warped = cv2.warpPerspective(input_img, h, (width, height))

    # binarize the warped input (Otsu chooses the threshold automatically)
    ret1, input_warped_thresh = cv2.threshold(input_warped, 0, 255,
                                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # subtract the empty template to suppress the printed grid
    diff = cv2.absdiff(template_img, input_warped_thresh)

    # clean up the difference image; ADAPTIVE_THRESH_GAUSSIAN_C only applies to
    # cv2.adaptiveThreshold, so a plain binary threshold is used here
    ret, diff = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)

    diff = cv2.equalizeHist(diff)

    # Create kernels
    kernel1 = np.ones((3,3),np.uint8)
    kernel2 = np.ones((6,6), np.uint8)
    # erode, then dilate to remove the remaining thin grid fragments
    diff_erode = cv2.erode(diff, kernel1)
    diff_dilated = cv2.dilate(diff_erode, kernel2)
    # invert diff_dilated so the digits end up dark on a light background
    diff_dilated_inv = cv2.bitwise_not(diff_dilated)

    return diff_dilated_inv
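
For context, this is roughly how the function is called; the file names and the MAX_FEATURES value below are placeholders:

if __name__ == "__main__":
    # read the filled-in scan and the empty template as grayscale
    scan = cv2.imread("filled_form.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("empty_template.png", cv2.IMREAD_GRAYSCALE)

    cleaned = match_img_to_template(scan, template,
                                    MAX_FEATURES=500,
                                    GOOD_MATCH_PERCENT=GOOD_MATCH_PERCENT)

    cv2.imwrite("cleaned_form.png", cleaned)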

0 Answers:

No answers yet.