I am making a paint application for iPhone. In my code I use an imageView which contains an outline image, and I place a CAEAGLLayer over it to fill colors into the outline. Now I capture a screenshot of the OpenGL ES [CAEAGLLayer] rendered content using the function:

- (UIImage *)snapshot:(UIView *)eaglview {
    GLint backingWidth1, backingHeight1;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer, which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);

    NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS; create a graphics context
    // with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0.
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system;
    // flip the CGImage by rendering it into the flipped bitmap context.
    // The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
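The snapshot function above flips the image through Core Graphics by drawing into a UIKit context. Purely to illustrate what that flip compensates for: glReadPixels returns rows bottom-up relative to UIKit, and an alternative is to reverse the raw rows in the buffer before creating the CGImage. A minimal sketch in plain C (the function name is mine, assuming tightly packed RGBA8888 data):

```c
#include <stdlib.h>
#include <string.h>

// Flip an RGBA8888 pixel buffer vertically in place.
// width/height are in pixels; each pixel is 4 bytes.
static void flip_rows_in_place(unsigned char *data, int width, int height) {
    size_t rowBytes = (size_t)width * 4;
    unsigned char *tmp = malloc(rowBytes);
    if (!tmp) return;
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top = data + (size_t)y * rowBytes;
        unsigned char *bottom = data + (size_t)(height - 1 - y) * rowBytes;
        // Swap the top and bottom rows through the temporary buffer
        memcpy(tmp, top, rowBytes);
        memcpy(top, bottom, rowBytes);
        memcpy(bottom, tmp, rowBytes);
    }
    free(tmp);
}
```

With the rows flipped this way, the CGImage could be drawn without the coordinate-system inversion, at the cost of an extra pass over the pixel data.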
I combine this screenshot with the outline image using the function:

- (void)Combine:(UIImage *)Back {
    UIImage *Front = backgroundImageView.image;

    //UIGraphicsBeginImageContext(Back.size);
    UIGraphicsBeginImageContext(CGSizeMake(640, 960));

    // Draw image1
    [Back drawInRect:CGRectMake(0, 0, Back.size.width * 2, Back.size.height * 2)];
    // Draw image2
    [Front drawInRect:CGRectMake(0, 0, Front.size.width * 2, Front.size.height * 2)];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);
    UIGraphicsEndImageContext();
}
And I save this image to the photo album using the function:

- (void)captureToPhotoAlbum {
    [self Combine:[self snapshot:self]];
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success" message:@"Image saved to Photo Album" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
    [alert show];
    [alert release];
}
The code above works, but the image quality of the screenshot is poor: there is a gray fringe along the outlines of the brush strokes. I have uploaded a screenshot of my application, which is the combination of the OpenGL ES content and the UIImage.
Is there a way to get a retina-quality screenshot of the OpenGL ES CAEAGLLayer content?
Thanks in advance!
Answer 0 (score: 4)
I don't believe resolution is your issue here. If you're not seeing the gray fringe around your drawing when it appears on screen, odds are you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG, where artifacts appear around sharp edges like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage *im = [UIImage imageWithCGImage:myCGRef];   // make UIImage from CGImageRef
NSData *imdata = UIImagePNGRepresentation(im);      // get PNG representation
UIImage *im2 = [UIImage imageWithData:imdata];      // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
While this is probably the easiest way to address your problem here, you could also try using multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on your fill-rate limits, MSAA could cause some slowdown in your application.
Answer 1 (score: 1)
You're using kCGImageAlphaPremultipliedLast when creating the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be kCGImageAlphaLast, but I think that just makes the creation call fail), so you may need to premultiply the data manually between getting it from OpenGL and creating the CG context.
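To illustrate what that manual premultiplication would look like, here is a minimal sketch in plain C (the function name is mine; it assumes straight-alpha RGBA8888 data as returned by glReadPixels):

```c
#include <stddef.h>

// Convert straight-alpha RGBA8888 pixels to premultiplied alpha in place:
// each color channel is scaled by alpha/255, rounded to nearest.
static void premultiply_rgba(unsigned char *data, size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; i++) {
        unsigned char *px = data + i * 4;
        unsigned a = px[3];
        // (c * a + 127) / 255 rounds to the nearest integer
        px[0] = (unsigned char)((px[0] * a + 127) / 255);
        px[1] = (unsigned char)((px[1] * a + 127) / 255);
        px[2] = (unsigned char)((px[2] * a + 127) / 255);
    }
}
```

Running this over the buffer between glReadPixels and CGImageCreate would make the data consistent with kCGImageAlphaPremultipliedLast.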
On the other hand, does your OpenGL context even have an alpha channel? Could you just make it opaque white and then use kCGImageAlphaNoneSkipLast?
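If making the GL context itself opaque isn't an option, a CPU-side alternative (a hedged sketch of my own, not part of the answer; function name is mine) is to composite the straight-alpha pixels over an opaque white background, after which the alpha channel can be ignored with kCGImageAlphaNoneSkipLast:

```c
#include <stddef.h>

// Composite straight-alpha RGBA8888 pixels over opaque white in place,
// so the result can be treated as kCGImageAlphaNoneSkipLast.
static void composite_over_white(unsigned char *data, size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; i++) {
        unsigned char *px = data + i * 4;
        unsigned a = px[3];
        for (int c = 0; c < 3; c++) {
            // out = src * a + white * (1 - a), with rounding in 8-bit fixed point
            px[c] = (unsigned char)((px[c] * a + 255u * (255u - a) + 127) / 255);
        }
        px[3] = 255; // alpha is now opaque/ignored
    }
}
```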