Optical Braille Recognition Using OpenCV

Date: 2018-06-04 14:45:25

Tags: java opencv image-processing ocr image-segmentation

I am trying to recognize the Braille characters in a document, with the goal of converting a Braille document into plain text. I am using OpenCV with Java for the image processing.

First, I imported an image of the Braille document:

Image of the original Braille document

Then I applied some image processing in order to binarize the original image. I have read that the important steps are:

  • Convert the image to grayscale
  • Reduce noise
  • Enhance the contrast of the edges
  • Binarize the image

Here is the code I used:

public static void main(String args[]) {

    Mat imgGrayscale = new Mat();

    Mat image = Imgcodecs.imread("C:/Users/original_braille.jpg", 1);

    // Convert the image to grayscale
    Imgproc.cvtColor(image, imgGrayscale, Imgproc.COLOR_BGR2GRAY);

    // Reduce noise, then binarize with an inverted adaptive threshold
    Imgproc.GaussianBlur(imgGrayscale, imgGrayscale, new Size(3, 3), 0);
    Imgproc.adaptiveThreshold(imgGrayscale, imgGrayscale, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 5, 4);

    // Remove speckle and re-binarize with Otsu's method
    Imgproc.medianBlur(imgGrayscale, imgGrayscale, 3);
    Imgproc.threshold(imgGrayscale, imgGrayscale, 0, 255, Imgproc.THRESH_OTSU);

    // Smooth and binarize once more
    Imgproc.GaussianBlur(imgGrayscale, imgGrayscale, new Size(3, 3), 0);
    Imgproc.threshold(imgGrayscale, imgGrayscale, 0, 255, Imgproc.THRESH_OTSU);

    Imgcodecs.imwrite("C:/Users/Jean-Baptiste/Desktop/Reconnaissance_de_formes/result.jpg", imgGrayscale);

}

I obtained the following result after this step:

Image Binarization

As far as I know, the quality of the image could be improved to obtain better results, but I have not tried other image-processing techniques. Can I improve the quality of my filters?

After that, I would like to segment the image in order to detect the different characters of the document, so that I can convert them into text.

For example, here is the document with the separation lines drawn in manually:

Separation lines

But I have not found a solution for this step. Is it possible to do the same thing with OpenCV?

1 answer:

Answer 0 (score: 0)

Here is a small script that finds the lines in your image. It is in Python; I don't have a Java installation of OpenCV, but I think you can get the idea of the algorithm anyway.

Finding the vertical lines is not as easy, because the spacing between dots depends on which letters follow each other. You could try a template-matching algorithm with some common letters. Given that at this point you know the height of the letters, it shouldn't be too hard.

Of course, this whole approach assumes that the document is not rotated.

import numpy as np
import cv2

# This is just the transposition of your code in python
img      = cv2.imread('L1ZzA.jpg')
gray     = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur     = cv2.GaussianBlur(gray,(3,3),0)
thres    = cv2.adaptiveThreshold(blur,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,5,4)
blur2    = cv2.medianBlur(thres,3)
ret2,th2 = cv2.threshold(blur2,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
blur3    = cv2.GaussianBlur(th2,(3,3),0)
ret3,th3 = cv2.threshold(blur3,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)

# Find connected components and extract the mean height and width
# connectivity must be 4 or 8, and the label type CV_32S (CV_8U is not valid here)
output = cv2.connectedComponentsWithStats(255-th3, 8, cv2.CV_32S)
mean_h = np.mean(output[2][:,cv2.CC_STAT_HEIGHT])
mean_w = np.mean(output[2][:,cv2.CC_STAT_WIDTH])

# Find empty rows, defined as having fewer than mean_h/2 foreground pixels
empty_rows = []
for i in range(th3.shape[0]):
  if np.sum(255-th3[i,:]) / 255.0 < mean_h/2.0:
    empty_rows.append(i)

# Group rows by labels
d = np.ediff1d(empty_rows, to_begin=1)

good_rows   = []
good_labels = []
label       = 0

# 1: assign labels to each row
# based on whether they are following each other or not (i.e. diff >1)
for i in range(1,len(empty_rows)-1):
  if d[i+1] == 1:
    good_labels.append(label)
    good_rows.append(empty_rows[i])

  elif d[i] > 1 and d[i+1] > 1:
    label = good_labels[len(good_labels)-1] + 1

# 2: find the mean row value associated with each label, and color that line in green in the original image
for i in range(label):
  frow = np.mean(np.asarray(good_rows)[np.where(np.asarray(good_labels) == i)])
  img[int(frow),:,1] = 255 

# Display the image with the green rows
cv2.imshow('test',img)
cv2.waitKey(0)
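To make the row-grouping step more concrete, here is a toy illustration (my own simplified variant, not the script's exact labeling logic) of how np.ediff1d splits the flagged rows into runs:

```python
import numpy as np

# Say these rows were flagged as empty: two runs, 3-5 and 9-11.
empty_rows = [3, 4, 5, 9, 10, 11]

# ediff1d gives each element's gap to its predecessor;
# a gap greater than 1 marks the start of a new run.
gaps = np.ediff1d(empty_rows, to_begin=1)  # -> [1, 1, 1, 4, 1, 1]

runs, current = [], [empty_rows[0]]
for row, gap in zip(empty_rows[1:], gaps[1:]):
    if gap == 1:
        current.append(row)
    else:
        runs.append(current)
        current = [row]
runs.append(current)

# Each run collapses to its mean row: the y-coordinate of one green line.
lines = [int(np.mean(r)) for r in runs]  # -> [4, 10]
```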