I'm currently working on a drawing application based on GLPaint. Saving the current screen has been a real pain for me. I have a ViewController, and on top of it I've added my UIImageView and a UIView (the PaintingView), so it looks like I'm drawing on top of the UIImageView.
I've managed to capture the current drawing with the help of this question: GLPaint save image. When I capture the current drawing I do get my strokes, but on a black background. What I want behind them is my background image (the UIImageView). Should I overlay the UIView on the UIImageView?
Answer 0 (score: 2)
You should load the image with OpenGL rather than with UIKit (as a UIImageView). Otherwise you can only capture the OpenGL view as one image and the UIKit views as a separate image.
To do that, render the image into a texture inside the PaintingView class provided in the GLPaint sample, and then draw that texture as a quad underneath the strokes in the painting view, as sketched below.
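A minimal sketch of that idea, assuming the OpenGL ES 1.1 fixed-function pipeline that the GLPaint sample uses; the method names loadBackgroundTexture: and drawBackgroundQuad: are made up for illustration and would live in PaintingView:

// Decode a UIImage into an OpenGL ES 1.1 texture.
// Assumes the view's EAGLContext is already current. Note that plain
// OpenGL ES 1.1 expects power-of-two texture dimensions unless the
// non-power-of-two extension is available.
- (GLuint)loadBackgroundTexture:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Draw the CGImage into an RGBA byte buffer (top row first in memory).
    GLubyte *pixels = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

    // Upload the pixels as a texture.
    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(pixels);
    return texture;
}

// Draw the texture as a full-screen quad before rendering the strokes.
// The vertex coordinates below assume an identity projection (normalized
// device coordinates); with GLPaint's glOrthof(0, w, 0, h, ...) projection
// you would use 0..width and 0..height instead.
- (void)drawBackgroundQuad:(GLuint)texture {
    static const GLfloat vertices[]  = { -1,-1,   1,-1,   -1, 1,   1, 1 };
    // t coordinates are flipped because the bitmap buffer is filled top-down.
    static const GLfloat texCoords[] = {  0, 1,   1, 1,    0, 0,   1, 0 };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_TEXTURE_2D);
}

Once the background and the strokes are drawn into the same framebuffer, a single OpenGL capture contains the whole picture.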
Answer 1 (score: 0)
I use this code to grab the image from OpenGL:
-(BOOL)iPhoneRetina{
    // Retina check: displayLinkWithTarget: exists from iOS 3.1 onward, so this effectively tests the screen scale.
    return ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && ([UIScreen mainScreen].scale == 2.0))?YES:NO;
}

void releasePixels(void *info, const void *data, size_t size) {
    // Free the pixel buffer once the CGDataProvider no longer needs it.
    free((void*)data);
}

-(UIImage *) glToUIImage{
    int imageWidth, imageHeight;
    int scale = [self iPhoneRetina]?2:1;
    imageWidth = self.frame.size.width*scale;
    imageHeight = self.frame.size.height*scale;

    NSInteger myDataLength = imageWidth * imageHeight * 4;

    // Allocate a buffer and read the pixels of the currently bound framebuffer into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Make a data provider that owns (and later frees) the buffer.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, releasePixels);

    // Prep the ingredients.
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * imageWidth;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // Make the CGImage.
    CGImageRef imageRef = CGImageCreate(imageWidth, imageHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    UIImage *myImage = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationDownMirrored]; // Return the image flipped, since OpenGL's rows are bottom-up.

    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    return myImage;
}
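One caveat not spelled out above: glReadPixels reads from whichever framebuffer is currently bound, so make the painting view's OpenGL state current before calling the method. A sketch, assuming the context and viewFramebuffer ivars from the GLPaint sample's PaintingView:

    // Bind the painting view's context and framebuffer, then read it back.
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    UIImage *drawing = [self glToUIImage];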
And this one merges it with the background image:
-(UIImage*)mergeImage:(UIImage*)image1 withImage:(UIImage*)image2{
    CGSize size = image1.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);

    // Draw the background first, then the OpenGL drawing on top of it.
    [image1 drawAtPoint:CGPointMake(0.0f, 0.0f)];
    [image2 drawAtPoint:CGPointMake(0.0f, 0.0f)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return result;
}
Something like this:
finalImage = [self mergeImage:BackgroundImage withImage:[self glToUIImage]];
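From there, finalImage can be saved like any other UIImage; for example, writing it to the photo library with UIKit's UIImageWriteToSavedPhotosAlbum (completion callback omitted in this sketch):

    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, NULL);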