What I want to do is take a screenshot of a number with pyautogui and then convert that number to a string with pytesseract. The code:

import pyautogui
import time
import PIL
from PIL import Image
import pytesseract
pytesseract.pytesseract.tesseract_cmd = 'C://Program Files (x86)//Tesseract-OCR//tesseract'
# Create image
time.sleep(5)
image = pyautogui.screenshot('projects/output.png', region=(1608, 314, 57, 41))
# Resize image
basewidth = 2000
img = Image.open('projects/output.png')
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize), PIL.Image.ANTIALIAS)
img.save('projects/output.png')
col = Image.open('projects/output.png')
gray = col.convert('L')
bw = gray.point(lambda x: 0 if x<128 else 255, '1')
bw.save('projects/output.png')
# Image to string
screen = Image.open('projects/output.png')
print(pytesseract.image_to_string(screen, config='tessedit_char_whitelist=0123456789'))
Now it seems that pytesseract does not accept the screenshot created by pyautogui. The code runs without errors, but prints an empty string. However, if I create an image in Paint and save it as 'output.png' to the same folder the screenshot is saved to, it does work.
Image output after resize and adjustments
Does anyone know what I'm missing?
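For debugging, the grayscale-to-binary step in the code above can be reproduced on a plain NumPy array, which makes it easy to check that the 128 cutoff maps pixels the way you expect (the 2x2 patch below is synthetic):

```python
import numpy as np

# Hypothetical 2x2 grayscale patch; values below 128 become black (0),
# the rest white (255), mirroring gray.point(lambda x: 0 if x < 128 else 255)
gray = np.array([[10, 200],
                 [127, 128]], dtype=np.uint8)
bw = np.where(gray < 128, 0, 255).astype(np.uint8)
print(bw.tolist())  # [[0, 255], [0, 255]]
```

If the binarized screenshot comes out all white or all black here, the threshold (rather than pytesseract) is the likely culprit.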
Answer 0 (score: 1)
Modify the path and try the following:
import os
import numpy as np
from PIL import Image, ImageGrab
import pytesseract
import cv2

src_path = "C:\\Users\\USERNAME\\Documents\\OCR\\"

def get_region(box):
    # Grab the region at the box coordinates
    im = ImageGrab.grab(box)
    # Resize the image to 200% of the original size
    a, b, c, d = box
    doubleX = (c - a) * 2
    doubleY = (d - b) * 2
    im.resize((doubleX, doubleY)).save(os.getcwd() + "\\test.png", 'PNG')

def get_string(img_path):
    # Read the image with OpenCV
    img = cv2.imread(img_path)
    # Convert to grayscale
    img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Apply dilation and erosion to remove some noise
    kernel = np.ones((1, 1), np.uint8)
    img = cv2.dilate(img, kernel, iterations=1)
    img = cv2.erode(img, kernel, iterations=1)
    # Write the image after noise removal
    cv2.imwrite(src_path + "removed_noise.png", img)
    # Apply a threshold to get a black-and-white image
    #img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 31, 2)
    # Write the preprocessed image
    cv2.imwrite(src_path + "thres.png", img)
    # Recognize text with Tesseract
    result = pytesseract.image_to_string(Image.open(src_path + "thres.png"))
    return result

def main():
    # Grab the region of the screenshot (box area)
    region = (1354, 630, 1433, 648)
    get_region(region)
    # Output results
    print("OCR Output: ")
    print(get_string(src_path + "test.png"))

if __name__ == "__main__":
    main()
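One detail worth noting in the snippet above: with a 1x1 kernel, cv2.dilate and cv2.erode are identity operations, so the "noise removal" step does nothing until the kernel is enlarged. As a rough illustration, dilation with a k x k kernel replaces each pixel with the maximum over its k x k neighborhood; here is a pure-NumPy sketch of that idea (not the OpenCV implementation):

```python
import numpy as np

def dilate(img, k=3):
    """Replace each pixel with the max over its k x k neighborhood (zero-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

spot = np.zeros((5, 5), dtype=np.uint8)
spot[2, 2] = 255  # a single bright pixel

print(int(dilate(spot, k=1).sum()))  # 255  -> a 1x1 kernel leaves the image unchanged
print(int(dilate(spot, k=3).sum()))  # 2295 -> the pixel grows into a 3x3 block (9 * 255)
```

To actually thicken thin digit strokes, try a 2x2 or 3x3 kernel instead of the 1x1 one above.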
Answer 1 (score: 0)
Convert it to a numpy array; pytesseract accepts those.
import numpy as np
import pyautogui
import pytesseract

img = np.array(pyautogui.screenshot())
print(pytesseract.image_to_string(img, config='-c tessedit_char_whitelist=0123456789'))
Alternatively, I would suggest using "mss" for the screenshots, since it is much faster.
import mss
import numpy as np
import pytesseract

with mss.mss() as sct:
    img = np.array(sct.grab(sct.monitors[1]))
    print(pytesseract.image_to_string(img, config='-c tessedit_char_whitelist=0123456789'))
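One caveat with the mss route: sct.grab returns pixels in BGRA order, while PIL and pytesseract expect RGB, which can hurt recognition on colored backgrounds. The conversion is plain NumPy slicing; a minimal sketch on a synthetic 1x1 "screenshot":

```python
import numpy as np

# Synthetic 1x1 BGRA pixel: blue=10, green=20, red=30, alpha=255
bgra = np.array([[[10, 20, 30, 255]]], dtype=np.uint8)

# Drop the alpha channel and reverse the channel order: BGRA -> RGB
rgb = bgra[:, :, :3][:, :, ::-1]
print(rgb[0, 0].tolist())  # [30, 20, 10]
```

Passing the converted array (rather than the raw grab) to image_to_string avoids the swapped channels.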