iPhone - screenshot of multiple OpenGL (CAEAGLLayer) views

Posted: 2012-05-14 06:30:13

Tags: iphone objective-c opengl-es xcode4.2 eaglview

I am building a paint app based on Apple's GLPaint sample. The app has two canvas views: one animates from left to right, and the other serves as the background view (as shown in the image).

[image: the two canvas views - an animated view over a background view]

I fill both views with color using CAEAGLLayer (via the subclassing technique). That works as expected. Now I need a screenshot of the complete view (the outline plus both OpenGL views), but I only get a screenshot of one view at a time (either the moving view or the background view). The screenshot code is wired up to both views, yet each call saves the contents of only one of them.

The screenshot code is below.

- (UIImage *)snapshot:(UIView *)eaglview {
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer, which is
    // already bound at this point, this call is redundant, but it is needed
    // when you are dealing with multiple renderbuffers.
    // Note: "viewRenderbuffer" is the renderbuffer object defined in this class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer.
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer.
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS; create a graphics context with the
    // target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to
        // take the scale into consideration. Set the scale parameter to the
        // OpenGL ES view's contentScaleFactor so that you get a
        // high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    } else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext.
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system; flip the CGImage by rendering it into the flipped
    // bitmap context. The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context.
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up.
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
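For context, this method captures only the renderbuffer that is bound in the currently active GL context, which is why each call returns just one view's contents. A minimal sketch of how it might be driven once per view (the property names `paintView`, `backgroundView`, and their `context` accessors are assumptions, not from the original code):

```objc
// Hypothetical driver code: capture each CAEAGLLayer-backed view separately.
// Each view's own EAGL context must be current before calling snapshot:,
// because glBindRenderbufferOES/glReadPixels operate on the current context.
[EAGLContext setCurrentContext:self.backgroundView.context];
UIImage *backgroundImage = [self snapshot:self.backgroundView];

[EAGLContext setCurrentContext:self.paintView.context];
UIImage *movingImage = [self snapshot:self.paintView];
```

With both images captured, merging them is an ordinary Core Graphics composition, as the answers below describe.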

Is there a way to merge the contents of the two CAEAGLLayer views?

Please help.

Many thanks.

2 Answers:

Answer 0 (score: 1)

You can take a screenshot of each view separately and then combine them like this:

UIGraphicsBeginImageContext(canvasSize);

[openGLImage1 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];
[openGLImage2 drawInRect:CGRectMake(0, 0, canvasSize.width, canvasSize.height)];

UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

You should use an appropriate canvasSize and frame when drawing each resulting UIImage; this is just a sample of how you could do it.
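To make that note concrete, here is a hedged sketch that draws each snapshot at its own view's frame instead of stretching both over the full canvas (the names `backgroundImage`, `movingImage`, `containerView`, and `movingView` are assumptions for illustration):

```objc
// Compose the two GL snapshots at their on-screen positions.
CGSize canvasSize = containerView.bounds.size;
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, [UIScreen mainScreen].scale);

// Draw the background snapshot over the whole canvas ...
[backgroundImage drawInRect:containerView.bounds];
// ... then the moving view's snapshot at its current frame, so the
// composition matches what is actually on screen.
[movingImage drawInRect:movingView.frame];

UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Note that the second image is drawn with normal alpha blending on top of the first, so any transparent pixels in the moving view's snapshot will show the background through.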

Answer 1 (score: 0)

See here for a better approach. It basically lets you capture a larger view that contains all of your OpenGL (and other) views into one fully composited screenshot, identical to what you see on screen.
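The linked approach is not quoted in this thread, but one common pattern for a fully composited screenshot of mixed UIKit and OpenGL content is to walk the view hierarchy, rendering ordinary layers with `renderInContext:` and substituting pixel snapshots for GL-backed views, since `-[CALayer renderInContext:]` does not capture CAEAGLLayer contents. A hedged sketch along those lines (the `window` reference is an assumption, and `snapshot:` is the method from the question):

```objc
// Sketch: composite UIKit content and GL snapshots into one image.
UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO,
                                       [UIScreen mainScreen].scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();

for (UIView *subview in window.subviews) {
    if ([subview.layer isKindOfClass:[CAEAGLLayer class]]) {
        // GL-backed view: draw its glReadPixels snapshot at its frame.
        UIImage *glImage = [self snapshot:subview];
        [glImage drawInRect:subview.frame];
    } else {
        // Ordinary UIKit view: let Core Animation render it.
        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, subview.frame.origin.x, subview.frame.origin.y);
        [subview.layer renderInContext:ctx];
        CGContextRestoreGState(ctx);
    }
}

UIImage *composite = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

This is only a sketch of the general technique; the linked answer may differ in details such as handling nested subviews or view transforms.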