I want to take a screenshot of both the OpenGL ES and the UIKit content at once, and after a lot of research I found a way that does exactly that:
- (UIImage *)makeScreenshot {
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "_colorRenderbuffer" with the actual name of the
    // renderbuffer object defined in your class.
    // glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer.
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    // NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger x = _visibleFrame.origin.x, y = _visibleFrame.origin.y,
              width = _visibleFrame.size.width, height = _visibleFrame.size.height;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer.
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    // CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
    //                                 kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
    //                                 ref, NULL, true, kCGRenderingIntentDefault);
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS; create a graphics context with the
    // target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to
        // take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is
        // greater than 1.0.
        CGFloat scale = _baseView.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext.
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system; flip the CGImage by rendering it into the flipped
    // bitmap context. The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context.
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up.
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    // return image;
    UIImageView *GLImage = [[UIImageView alloc] initWithImage:image];

    UIGraphicsBeginImageContext(_visibleFrame.size);
    // The order of rendering into the context determines what ends up on top;
    // this draws the UIKit content on top of the GL image.
    [GLImage.layer renderInContext:UIGraphicsGetCurrentContext()];
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -_visibleFrame.origin.x, -_visibleFrame.origin.y);
    [_baseView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Do something with the resulting image.
    return finalImage;
}
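A note on when to call this: with the default non-retained backing (kEAGLDrawablePropertyRetainedBacking set to NO), the renderbuffer contents are undefined after presentRenderbuffer:, so glReadPixels should run while the frame is still in the buffer. A minimal sketch of a plausible call site, where _context and drawFrame are assumed names that do not appear in the code above:

    [EAGLContext setCurrentContext:_context];  // assumed EAGLContext ivar
    [self drawFrame];                          // assumed GL rendering pass
    UIImage *screenshot = [self makeScreenshot];
    [_context presentRenderbuffer:GL_RENDERBUFFER_OES];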
But the interesting part is the merging. Right now I have two

    UIGraphicsBeginImageContext();
    .......
    .......
    UIGraphicsEndImageContext();

blocks: first the OpenGL ES image is generated, then it is merged with the UIKit image. Is there a better way to do this in a single UIGraphicsBeginImageContext(); ... UIGraphicsEndImageContext(); block, instead of creating a UIImageView and then rendering it? Something like:
CGContextRef cgcontext = UIGraphicsGetCurrentContext();

// The UIKit coordinate system is upside down relative to the GL/Quartz
// coordinate system; flip the CGImage by rendering it into the flipped
// bitmap context. The size of the destination area is measured in POINTS.
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// The merging part starts.
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -_visibleFrame.origin.x, -_visibleFrame.origin.y);
[_baseView.layer renderInContext:UIGraphicsGetCurrentContext()];
// The merging part ends.

// Retrieve the UIImage from the current context.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But unfortunately it does not merge. Can anyone spot the mistake here and/or suggest the best approach?
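As a hedged guess at the mistake: the context is still in kCGBlendModeCopy when renderInContext: runs, so the transparent regions of the UIKit layer may simply overwrite the GL pixels instead of compositing over them. A minimal sketch of the single-block variant with the blend mode reset, reusing the same iref, widthInPoints, heightInPoints, _visibleFrame, and _baseView from above:

    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Draw the flipped GL snapshot first; copy mode is safe here because
    // nothing has been drawn into the context yet.
    CGContextSetBlendMode(ctx, kCGBlendModeCopy);
    CGContextDrawImage(ctx, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Switch back to normal compositing before rendering the UIKit layer.
    CGContextSetBlendMode(ctx, kCGBlendModeNormal);
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, -_visibleFrame.origin.x, -_visibleFrame.origin.y);
    [_baseView.layer renderInContext:ctx];
    CGContextRestoreGState(ctx);

    UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();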
Answer 0 (score: 0)
With iOS 7, Apple introduced UISnapshotting, and they claim it is very fast, much faster than renderInContext:.
UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen's contents in your app. (...) This method is faster than trying to render the contents of the screen into a bitmap image yourself.
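If an actual UIImage is needed rather than a live snapshot view, the related iOS 7 API drawViewHierarchyInRect:afterScreenUpdates: can render the view hierarchy, including OpenGL ES content, into an ordinary image context. A minimal sketch, where view stands for whichever view covers the area to capture:

    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    // Passing YES waits for pending screen updates, which is generally
    // needed for GL-backed content to be captured.
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();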
Also, have a look at the links below; they should give you some insight and point you in the right direction.