Getting screen pixel values in OS X with Python

Date: 2012-10-19 16:50:46

Tags: python macos automation ui-automation

I'm building an automated game bot in Python on OS X 10.8.2, and while researching Python GUI automation I came across autopy. The mouse manipulation API is great, but it seems the screen capture methods rely on deprecated OpenGL calls...

Is there an efficient way to get the color value of a pixel on screen in OS X? The only way I can think of right now is os.system("screencapture foo.png"), but that seems to have unnecessary overhead, since I'll be polling very rapidly.

3 answers:

Answer 0 (score: 18)

A small improvement, but using screencapture's TIFF output option is a bit quicker:

$ time screencapture -t png /tmp/test.png
real        0m0.235s
user        0m0.191s
sys         0m0.016s
$ time screencapture -t tiff /tmp/test.tiff
real        0m0.079s
user        0m0.028s
sys         0m0.026s

As you say, that still has a lot of overhead (subprocess creation, writing to/reading from disk, compression/decompression).
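
If you do stick with shelling out to screencapture, a minimal polling sketch might look like the following (grab_pixel is an illustrative name, and it assumes Pillow/PIL is available to read the file back):

import subprocess

from PIL import Image


def grab_pixel(x, y, path="/tmp/screen.tiff"):
    # TIFF output is noticeably faster than PNG here (see the timings above),
    # but the subprocess and disk round-trip overhead remains.
    subprocess.check_call(["screencapture", "-t", "tiff", path])
    return Image.open(path).getpixel((x, y))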

Instead, you can use PyObjC to capture the screen with CGWindowListCreateImage. I found it takes about 70ms (~14fps) to capture a 1680x1050 pixel screen, and the values are accessible in memory.

A few random notes:

  • Importing the Quartz.CoreGraphics module is the slowest part, taking around 1 second. The same is true of importing most PyObjC modules. It matters less in this case, but for short-lived processes you might be better off writing the tool in ObjC.
  • Specifying a smaller region is a little quicker, but not hugely so (~40ms for a 100x100px block, ~70ms for 1680x1050). Most of the time seems to be spent in the CGDataProviderCopyData call alone - I wonder if there is a way to access the data directly, since we don't need to modify it?
  • The ScreenPixel.pixel function is very quick, but accessing large numbers of pixels is still slow (0.01ms * 1650*1050 comes to about 17 seconds) - if you need to access lots of pixels, it will probably be quicker to struct.unpack_from them all in one go (see the sketch after the code below).

Here is the code:

import time
import struct

import Quartz.CoreGraphics as CG


class ScreenPixel(object):
    """Captures the screen using CoreGraphics, and provides access to
    the pixel values.
    """

    def capture(self, region = None):
        """region should be a CGRect, something like:

        >>> import Quartz.CoreGraphics as CG
        >>> region = CG.CGRectMake(0, 0, 100, 100)
        >>> sp = ScreenPixel()
        >>> sp.capture(region=region)

        The default region is CG.CGRectInfinite (captures the full screen)
        """

        if region is None:
            region = CG.CGRectInfinite
        else:
            # TODO: Odd widths cause the image to warp. This is likely
            # caused by the offset calculation in ScreenPixel.pixel, and
            # could be modified to allow odd widths
            if region.size.width % 2 > 0:
                emsg = "Capture region width should be even (was %s)" % (
                    region.size.width)
                raise ValueError(emsg)

        # Create screenshot as CGImage
        image = CG.CGWindowListCreateImage(
            region,
            CG.kCGWindowListOptionOnScreenOnly,
            CG.kCGNullWindowID,
            CG.kCGWindowImageDefault)

        # Intermediate step, get pixel data as CGDataProvider
        prov = CG.CGImageGetDataProvider(image)

        # Copy data out of CGDataProvider, becomes string of bytes
        self._data = CG.CGDataProviderCopyData(prov)

        # Get width/height of image
        self.width = CG.CGImageGetWidth(image)
        self.height = CG.CGImageGetHeight(image)

    def pixel(self, x, y):
        """Get pixel value at given (x,y) screen coordinates

        Must call capture first.
        """

        # Pixel data is unsigned char (8-bit unsigned integer),
        # and there are four per pixel (blue, green, red, alpha)
        data_format = "BBBB"

        # Calculate offset, based on
        # http://www.markj.net/iphone-uiimage-pixel-color/
        offset = 4 * ((self.width*int(round(y))) + int(round(x)))

        # Unpack data from string into Python'y integers
        b, g, r, a = struct.unpack_from(data_format, self._data, offset=offset)

        # Return BGRA as RGBA
        return (r, g, b, a)


if __name__ == '__main__':
    # Timer helper-function
    import contextlib

    @contextlib.contextmanager
    def timer(msg):
        start = time.time()
        yield
        end = time.time()
        print "%s: %.02fms" % (msg, (end-start)*1000)


    # Example usage
    sp = ScreenPixel()

    with timer("Capture"):
        # Take screenshot (takes about 70ms for me)
        sp.capture()

    with timer("Query"):
        # Get pixel value (takes about 0.01ms)
        print sp.width, sp.height
        print sp.pixel(0, 0)


    # To verify screen-cap code is correct, save all pixels to PNG,
    # using http://the.taoofmac.com/space/projects/PNGCanvas

    from pngcanvas import PNGCanvas
    c = PNGCanvas(sp.width, sp.height)
    for x in range(sp.width):
        for y in range(sp.height):
            c.point(x, y, color = sp.pixel(x, y))

    with open("test.png", "wb") as f:
        f.write(c.dump())
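
As a rough follow-up to the third note above (not part of the original answer), here is a sketch of unpacking every pixel with a single struct.unpack_from call; all_pixels is an illustrative name, and it assumes the rows are tightly packed at width * 4 bytes:

import struct


def all_pixels(sp):
    """Return a list of (b, g, r, a) tuples for a captured ScreenPixel."""
    count = sp.width * sp.height
    flat = struct.unpack_from("%dB" % (count * 4), sp._data)
    # Slice the flat tuple back into one 4-tuple per pixel, in row-major order
    return [flat[i:i + 4] for i in range(0, count * 4, 4)]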

Answer 1 (score: 1)

I came across this post while searching for a way to grab screenshots in Mac OS X for real-time processing. I tried ImageGrab from PIL, as suggested in some other posts, but could not get the data fast enough (only about 0.5 fps).

The PyObjC answer in this post, https://stackoverflow.com/a/13024603/3322123, saved my day! Thanks @dbr!

However, my task requires getting all pixel values rather than just a single pixel, and also, following the third note in @dbr's answer, I added a new method to this class to get a full image, in case anyone else needs it.

The image data is returned as a numpy array of shape (height, width, 3), which can be used directly for post-processing in numpy, OpenCV, etc. Getting an individual pixel value from it also becomes very simple with numpy indexing.

I tested the code with a 1600 x 1000 screenshot - getting the data with capture() takes about 30 ms and converting it to a numpy array with getimage() takes only about 50 ms on my Macbook. So now I get > 10 fps, and even faster for smaller regions.

import numpy as np

def getimage(self):
    # Reinterpret the raw BGRA buffer as a (num_pixels, 4) array of bytes
    imgdata = np.fromstring(self._data, dtype=np.uint8).reshape(len(self._data) // 4, 4)
    # Drop the alpha channel and reshape to (height, width, 3)
    return imgdata[:self.width * self.height, :-1].reshape(self.height, self.width, 3)

Note that I drop the "alpha" channel from the 4-channel BGRA data.
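
A short usage sketch, assuming the method above has been attached to the ScreenPixel class from the first answer:

# Attach getimage to the class (or paste it into the class body directly)
ScreenPixel.getimage = getimage

sp = ScreenPixel()
sp.capture()
img = sp.getimage()    # numpy array, shape (height, width, 3), B/G/R order
b, g, r = img[10, 20]  # pixel at screen coordinates x=20, y=10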

Answer 2 (score: 0)

This was all very helpful, and I would have just commented but I don't have the reputation. However, I do have some sample code that combines the answers above for lightning-fast screen capture - thanks @dbr and @qqg!

getQuizContents()
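
The rest of that sample is not reproduced here; as a purely hypothetical sketch of the combination it describes (the CoreGraphics capture from answer 0 plus the numpy conversion from answer 1), it might look something like this:

import numpy as np
import Quartz.CoreGraphics as CG


def grab_screen(region=None):
    """Return the screen contents as a (height, width, 3) B/G/R numpy array."""
    if region is None:
        region = CG.CGRectInfinite

    # Capture with CoreGraphics, as in answer 0
    image = CG.CGWindowListCreateImage(
        region,
        CG.kCGWindowListOptionOnScreenOnly,
        CG.kCGNullWindowID,
        CG.kCGWindowImageDefault)
    data = CG.CGDataProviderCopyData(CG.CGImageGetDataProvider(image))
    width = CG.CGImageGetWidth(image)
    height = CG.CGImageGetHeight(image)

    # Convert to a numpy array and drop the alpha channel, as in answer 1
    pixels = np.fromstring(data, dtype=np.uint8).reshape(len(data) // 4, 4)
    return pixels[:width * height, :-1].reshape(height, width, 3)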