I am trying to use pytesseract together with PIL to recognize the vehicle registration number from number plate images, but I cannot get any text out of these images.
Code:
from PIL import Image
from pytesseract import image_to_string
img = Image.open('D://carimage1')
text = image_to_string(img)
print(text)
This works fine for ordinary scanned documents, but not for vehicle number plates.
Sample image 1
Sample image 2
Answer 1 (score: 0)
This works only for the second image:
from PIL import Image, ImageFilter
import pytesseract

img = Image.open('TcjXJ.jpg')

# Blur the image; after blurring, only pixels well inside large white
# regions (the plate background) stay exactly (255, 255, 255).
img2 = img.filter(ImageFilter.BLUR)
pixels = img2.load()
width, height = img2.size

# Collect the coordinates of every pure-white pixel.
x_ = []
y_ = []
for x in range(width):
    for y in range(height):
        if pixels[x, y] == (255, 255, 255):
            x_.append(x)
            y_.append(y)

# Crop the original image to the bounding box of those pixels.
img = img.crop((min(x_), min(y_), max(x_), max(y_)))

# OCR the crop, restricting tesseract to upper-case letters and digits.
text = pytesseract.image_to_string(img, lang='eng', config='-c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789')
print(text)
You get the output:
TN 99 F 2378
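If the character whitelist alone is not enough, a possible extra step (my own suggestion, not part of the original answer, and assuming tesseract 4.x) is to also tell tesseract that the crop contains a single line of text via --psm 7:

# Assumption: img is the cropped plate from above; --psm 7 treats the image as one text line.
text = pytesseract.image_to_string(
    img,
    lang='eng',
    config='--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
)
print(text)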
Answer 2 (score: 0)
Here is a rough idea of how to solve the problem; you can build on top of it. You need to extract the number plate from the image and then send that image to tesseract. Read the code comments to understand what I am trying to do.
import numpy as np
import cv2
import pytesseract
import matplotlib.pyplot as plt
img = cv2.imread('/home/muthu/Documents/3r9OQ.jpg')
#convert my image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
#perform adaptive threshold so that I can extract proper contours from the image
#need this to extract the name plate from the image.
thresh = cv2.adaptiveThreshold(gray,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY,11,2)
contours,h = cv2.findContours(thresh,1,2) # 1 = cv2.RETR_LIST, 2 = cv2.CHAIN_APPROX_SIMPLE (OpenCV 4.x returns two values here)
#once I have the contours list, i need to find the contours which form rectangles.
#the contours can be approximated to minimum polygons, polygons of size 4 are probably rectangles
largest_rectangle = [0,0]
for cnt in contours:
    approx = cv2.approxPolyDP(cnt,0.01*cv2.arcLength(cnt,True),True)
    if len(approx)==4: #polygons with 4 points is what I need.
        area = cv2.contourArea(cnt)
        if area > largest_rectangle[0]:
            #find the polygon which has the largest size.
            largest_rectangle = [cv2.contourArea(cnt), cnt, approx]
x,y,w,h = cv2.boundingRect(largest_rectangle[1])
#crop the rectangle to get the number plate.
roi=img[y:y+h,x:x+w]
#cv2.drawContours(img,[largest_rectangle[1]],0,(0,0,255),-1)
plt.imshow(roi, cmap = 'gray')
plt.show()
The output is the cropped number plate, as shown below:
Now pass the cropped image to tesseract.
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
text = pytesseract.image_to_string(thresh)
print(text)
I get the following output for the sample image you shared.
The parsing will be more accurate if you perspective-transform the number plate image onto its bounding rectangle and remove the extra border around it. Let me know if you need help with that as well.
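A minimal sketch of that perspective correction, assuming the 4-point polygon stored in largest_rectangle[2] above; the corner ordering and target size are my own additions, not part of the original answer:

import numpy as np
import cv2

# Assumption: largest_rectangle[2] is the 4-point approx polygon found above.
pts = largest_rectangle[2].reshape(4, 2).astype('float32')

# Order the corners as top-left, top-right, bottom-right, bottom-left.
s = pts.sum(axis=1)
d = np.diff(pts, axis=1).ravel()
ordered = np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                    pts[np.argmax(s)], pts[np.argmax(d)]], dtype='float32')

# Target width/height measured from the ordered corners.
w = int(max(np.linalg.norm(ordered[0] - ordered[1]), np.linalg.norm(ordered[3] - ordered[2])))
h = int(max(np.linalg.norm(ordered[0] - ordered[3]), np.linalg.norm(ordered[1] - ordered[2])))
dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], dtype='float32')

# Warp the plate onto an upright rectangle before sending it to tesseract.
M = cv2.getPerspectiveTransform(ordered, dst)
plate = cv2.warpPerspective(img, M, (w, h))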
Note that the contour-based code above will not work for the second image as-is, because I filter the search down to polygons with 4 sides. Hope you get the idea.
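If you want to relax that 4-sided assumption, one possible alternative (my own sketch, not tested on these images) is to take the minimum-area rotated rectangle of the largest contour instead of requiring an exact 4-point polygon approximation. This assumes the plate is still the largest contour, which may not hold for busy backgrounds:

# Sketch: crop around the rotated bounding box of the largest contour.
largest = max(contours, key=cv2.contourArea)
rect = cv2.minAreaRect(largest)        # ((cx, cy), (w, h), angle)
box = np.int32(cv2.boxPoints(rect))    # 4 corners of the rotated box
x, y, w, h = cv2.boundingRect(box)     # axis-aligned crop around it
roi = img[y:y+h, x:x+w]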