I've tried using the pyautogui module and its function for locating an image on the screen,
pyautogui.locateOnScreen()
but it takes about 5-10 seconds to process. Is there any other way to find an image on the screen faster? Basically, I want a faster version of locateOnScreen().
Answer 0 (score: 6)
The official documentation says it should take 1-2 seconds on a 1920x1080 screen, so your times seem a bit slow. I would try to optimize: use grayscale matching (grayscale=True), which should give about a 30% speed-up; this is described in the documentation linked above.
If this is still not fast enough, you can check the sources of pyautogui and see that locating on screen uses a specific algorithm (the Knuth-Morris-Pratt search algorithm) implemented in Python. Implementing this part in C could therefore result in a very noticeable speed-up.
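A minimal sketch of the grayscale suggestion (the image file and region values below are placeholders, not from the question):

import pyautogui

# grayscale=True is reported to give roughly a 30% speed-up;
# restricting the search with region= (used in later answers) helps even more.
# 'button.png' and the region tuple are placeholder values.
location = pyautogui.locateOnScreen('button.png',
                                    grayscale=True,
                                    region=(0, 0, 800, 600))  # left, top, width, height
if location is not None:
    print('Found at', location)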
Answer 1 (score: 0)
If you're looking for image recognition, you can use Sikuli. Check out the Hello World tutorial.
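As a rough idea only, a SikuliX script (Python/Jython syntax) for clicking a matched image might look like the sketch below; the image name is a placeholder and the tutorial linked above covers the real setup:

# Runs inside the SikuliX IDE; 'submit_button.png' is a placeholder screenshot.
if exists("submit_button.png"):
    click("submit_button.png")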
Answer 2 (score: 0)
Create a function, and use the confidence parameter (which requires opencv); threading can be used as well:
import pyautogui
import threading

def locate_cat():
    cat = None
    while cat is None:
        cat = pyautogui.locateOnScreen('Pictures/cat.png', confidence=.65,
                                       region=(1722, 748, 200, 450))
    return cat
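The snippet imports threading but never shows it in use; below is a sketch (an assumption, not part of the original answer) of how locate_cat() could be run in a background thread so the rest of the script keeps working while the search loops:

# Hypothetical usage: run the blocking search defined above in a background thread.
result = {}

def search_in_background():
    result['cat'] = locate_cat()

t = threading.Thread(target=search_in_background, daemon=True)
t.start()
# ... do other work here ...
t.join()               # wait for the search to finish
print(result['cat'])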
If you roughly know where the image will be on the screen, you can use the region parameter.
In some cases you can locate the image on screen once, assign the found region to a variable, and then pass region=somevar as the parameter, so the search looks in the same place it was last found, which helps speed up detection.
For example:
import pyautogui

def first_find():
    # Search the rough screen area until the door is found the first time.
    front_door = None
    while front_door is None:
        front_door = pyautogui.locateOnScreen('frontdoor.png', confidence=.95,
                                              region=(1722, 748, 200, 450))
    return front_door

def second_find(front_door_save):
    # Re-locate the door, but only inside the region where it was last found.
    front_door = None
    while front_door is None:
        front_door = pyautogui.locateOnScreen('frontdoor.png', confidence=.95,
                                              region=front_door_save)
    return front_door

def find_person(front_door):
    # Look for the person only inside the door region.
    person = None
    while person is None:
        person = pyautogui.locateOnScreen('person.png', confidence=.95,
                                          region=front_door)
    return person

while True:
    front_door_save = first_find()
    front_door = second_find(front_door_save)
    if front_door is not None:
        find_person(front_door)
Answer 3 (score: 0)
I ran into the same problem with pyautogui. It is a very convenient library, but it is quite slow.
I got a 10x speed-up by relying on cv2 and PIL:
def benchmark_opencv_pil(method):
    img = ImageGrab.grab(bbox=REGION)
    img_cv = cv.cvtColor(np.array(img), cv.COLOR_RGB2BGR)
    res = cv.matchTemplate(img_cv, GAME_OVER_PICTURE_CV, method)
    # print(res)
    return (res >= 0.8).any()
The TM_CCOEFF_NORMED method worked well here (and the 0.8 threshold can obviously be tuned).
Source: Fast locateOnScreen with Python
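If you also need the match position (similar to what locateOnScreen returns) rather than only a boolean, the result of cv.matchTemplate can be queried with cv.minMaxLoc. A minimal sketch, reusing REGION and GAME_OVER_PICTURE_CV from the benchmark below; locate_opencv is a hypothetical helper name:

def locate_opencv(method=cv.TM_CCOEFF_NORMED, threshold=0.8):
    # Grab the region of interest and convert it to an OpenCV BGR image.
    img = ImageGrab.grab(bbox=REGION)
    img_cv = cv.cvtColor(np.array(img), cv.COLOR_RGB2BGR)
    res = cv.matchTemplate(img_cv, GAME_OVER_PICTURE_CV, method)
    # Best score and the top-left corner of the best match (relative to REGION).
    _, max_val, _, max_loc = cv.minMaxLoc(res)
    if max_val >= threshold:
        h, w = GAME_OVER_PICTURE_CV.shape[:2]
        return (max_loc[0], max_loc[1], w, h)  # left, top, width, height
    return None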
For completeness, here is the full benchmark:
import pyautogui as pg
import numpy as np
import cv2 as cv
from PIL import ImageGrab, Image
import time

REGION = (0, 0, 400, 400)
GAME_OVER_PICTURE_PIL = Image.open("./balloon_fight_game_over.png")
GAME_OVER_PICTURE_CV = cv.imread('./balloon_fight_game_over.png')


def timing(f):
    def wrap(*args, **kwargs):
        time1 = time.time()
        ret = f(*args, **kwargs)
        time2 = time.time()
        print('{:s} function took {:.3f} ms'.format(
            f.__name__, (time2 - time1) * 1000.0))
        return ret
    return wrap


@timing
def benchmark_pyautogui():
    res = pg.locateOnScreen(GAME_OVER_PICTURE_PIL,
                            grayscale=True,  # should provide a speed-up
                            confidence=0.8,
                            region=REGION)
    return res is not None


@timing
def benchmark_opencv_pil(method):
    img = ImageGrab.grab(bbox=REGION)
    img_cv = cv.cvtColor(np.array(img), cv.COLOR_RGB2BGR)
    res = cv.matchTemplate(img_cv, GAME_OVER_PICTURE_CV, method)
    # print(res)
    return (res >= 0.8).any()


if __name__ == "__main__":

    im_pyautogui = benchmark_pyautogui()
    print(im_pyautogui)

    methods = ['cv.TM_CCOEFF', 'cv.TM_CCOEFF_NORMED', 'cv.TM_CCORR',
               'cv.TM_CCORR_NORMED', 'cv.TM_SQDIFF', 'cv.TM_SQDIFF_NORMED']

    # cv.TM_CCOEFF_NORMED actually seems to be the most relevant method
    for method in methods:
        print(method)
        im_opencv = benchmark_opencv_pil(eval(method))
        print(im_opencv)
The results show a 10x improvement:
benchmark_pyautogui function took 175.712 ms
False
cv.TM_CCOEFF
benchmark_opencv_pil function took 21.283 ms
True
cv.TM_CCOEFF_NORMED
benchmark_opencv_pil function took 23.377 ms
False
cv.TM_CCORR
benchmark_opencv_pil function took 20.465 ms
True
cv.TM_CCORR_NORMED
benchmark_opencv_pil function took 25.347 ms
False
cv.TM_SQDIFF
benchmark_opencv_pil function took 23.799 ms
True
cv.TM_SQDIFF_NORMED
benchmark_opencv_pil function took 22.882 ms
True