How to asynchronously capture a screenshot in iOS (cocos2d)?

Asked: 2012-08-18 10:02:47

Tags: objective-c ios cocos2d-iphone

I have working screenshot code that I got from here (Stack Overflow). The problem is that I cannot run it asynchronously: when I try, the screen comes out all black.

Is this possible? If so, how would I do it?

Here is the code I am using:

CGSize winSize = [CCDirector sharedDirector].winSize;
int screenScale = [UIScreen mainScreen].scale;
int width = winSize.width * screenScale;
int height = winSize.height * screenScale;

NSInteger myDataLength = width * height * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < height; y++)
{
    for(int x = 0; x < width * 4; x++)
    {
        // height - 1 - y mirrors the row index; width and height are already
        // in pixels, so the screen scale must not appear here (the original
        // "height-(1*screenScale)" writes out of bounds on retina devices).
        buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];

// release what we own (UIImage retains the CGImage). buffer2 must outlive
// imageRef because the data provider was created with a NULL release
// callback, so free it only after the image is no longer needed.
free(buffer);
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);

2 answers:

Answer 0 (score: 3)

Here is a suggestion for doing some of the work on a separate thread; you still have to read the OpenGL pixels from the main thread. My suggestion is to save the pixels into a buffer on the main thread, and then schedule a background task that uses that buffer to create the UIImage.

- (void) takeScreenshot
{
    CGSize winSize = [CCDirector sharedDirector].winSize;
    int screenScale = [UIScreen mainScreen].scale;
    int width = winSize.width * screenScale;
    int height = winSize.height * screenScale;

    NSInteger myDataLength = width * height * 4;

    // allocate array and read pixels into it.
    buffer = (GLubyte *) malloc(myDataLength); // NOW PART OF THE CLASS DEFINITION
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    [self performSelectorInBackground:@selector(finishScreenshot) withObject:nil];
}

Instead of keeping the buffer array as a class variable, you could actually pass the buffer as an NSData object in the performSelectorInBackground call and add a parameter to finishScreenshot.

-(void)finishScreenshot
{
    // note: a method run via performSelectorInBackground: should wrap its
    // body in @autoreleasepool, since the background thread has no pool.
    CGSize winSize = [CCDirector sharedDirector].winSize;
    int screenScale = [UIScreen mainScreen].scale;
    int width = winSize.width * screenScale;
    int height = winSize.height * screenScale;
    NSInteger myDataLength = width * height * 4; // recomputed: not in scope here

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < height; y++)
    {
        for(int x = 0; x < width * 4; x++)
        {
            buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
        }
    }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];

    // the ivar buffer has been consumed; release everything we own here.
    // buffer2 must stay alive as long as the image does, since the provider
    // was created with a NULL release callback.
    free(buffer);
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
}

As I said, I am not sure this is feasible, but it should work. Try it and check whether the performance gain is actually worth the effort.

Answer 1 (score: 2)

This is not possible.

An OpenGL view can only be modified and accessed from the main thread, so any attempt to read the contents of the OpenGL framebuffer from another thread will fail.