glReadPixels() returns a black image

Asked: 2014-04-11 12:58:38

Tags: ios objective-c opengl-es avcapturesession glreadpixels

I am using AVCaptureSession for a live camera view and rendering some images on top of it in an overlay view. I am not using any EAGLView; I simply overlay a few images on the AVCaptureSession's preview layer. I want to take a screenshot of the live camera feed together with the overlaid images. After searching, I found some links that eventually led me to glReadPixels(), but when I implemented the code it returned a black image. I only added the OpenGLES.framework library and imported it.

- (void)viewDidLoad
{
    [super viewDidLoad];

    [self setCaptureSession:[[AVCaptureSession alloc] init]];

    [self addVideoInputFrontCamera:NO]; // set to YES for the front camera, NO for the back camera
    [self addStillImageOutput];

    [self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]]];
    [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    CGRect layerRect = [[[self view] layer] bounds];
    [[self previewLayer] setBounds:layerRect];
    [[self previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:[self previewLayer]];

    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(saveImageToPhotoAlbum)
                                                 name:kImageCapturedSuccessfully
                                               object:nil];

    [[self captureSession] startRunning];

    // Overlay image drawn on top of the preview layer
    UIImageView *dot = [[UIImageView alloc] initWithFrame:CGRectMake(50, 50, 200, 200)];
    dot.image = [UIImage imageNamed:@"draw.png"];
    [self.view addSubview:dot];
}
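The helper methods addVideoInputFrontCamera: and addStillImageOutput are not shown in the question. Purely for context, a hypothetical sketch of what the video-input helper might do, assuming standard AVFoundation device lookup (the real implementation may differ):

    - (void)addVideoInputFrontCamera:(BOOL)front
    {
        // Pick the capture device at the requested position
        AVCaptureDevicePosition position = front ? AVCaptureDevicePositionFront
                                                 : AVCaptureDevicePositionBack;
        AVCaptureDevice *camera = nil;
        for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
            if ([device position] == position) {
                camera = device;
                break;
            }
        }

        // Wrap the device in an input and attach it to the session
        NSError *error = nil;
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
        if (input && [[self captureSession] canAddInput:input]) {
            [[self captureSession] addInput:input];
        }
    }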

Capturing the live camera feed together with the overlaid content using glReadPixels():

- (UIImage *)glToUIImage
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    // CGRect s = CGRectMake(0, 0, 320.0f * scale, 480.0f * scale);
    CGRect s = CGRectMake(0, 0, 768.0f * scale, 1024.0f * scale);

    // Read what is (supposedly) in the current framebuffer into a raw RGBA buffer
    uint8_t *buffer = (uint8_t *)malloc(s.size.width * s.size.height * 4);
    glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Wrap the raw buffer in a CGImage
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, s.size.width * s.size.height * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4,
                                    colorSpace, kCGBitmapByteOrderDefault, ref,
                                    NULL, true, kCGRenderingIntentDefault);

    size_t width = CGImageGetWidth(iref);
    size_t height = CGImageGetHeight(iref);
    size_t length = width * height * 4;
    uint32_t *pixels = (uint32_t *)malloc(length);

    CGContextRef context1 = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                  CGImageGetColorSpace(iref),
                                                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

    // OpenGL's origin is bottom-left while CoreGraphics' is top-left, so flip vertically
    CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0f, height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGContextConcatCTM(context1, transform);
    CGContextDrawImage(context1, CGRectMake(0.0f, 0.0f, width, height), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context1);

    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];

    CGDataProviderRelease(ref);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(iref);
    CGContextRelease(context1);
    CGImageRelease(outputRef);
    free(pixels);
    free(buffer);

    UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);
    NSLog(@"Screenshot size: %d, %d", (int)[outputImage size].width, (int)[outputImage size].height);

    return outputImage;
}


- (void)screenshot:(id)sender
{
    [self glToUIImage];
}

But it returns a black image.

1 Answer:

Answer 0 (score: 0)

glReadPixels() won't work with an AV Foundation preview layer. There is no OpenGL ES context for it to capture pixels from, and even if there were, you would need to read the pixels from it before the scene is presented to the display.
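For reference, a minimal sketch of the situation where glReadPixels() does work, assuming you render your own scene with an EAGLContext (context, framebuffer, width, height, and buffer below are placeholders for your own setup):

    // glReadPixels() only returns valid data when an OpenGL ES context is
    // current and a framebuffer with freshly rendered content is bound.
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // ... issue the draw calls for the scene here ...

    // Read BEFORE presenting: once presentRenderbuffer: runs, the
    // renderbuffer's contents are undefined (unless retained backing is on).
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    [context presentRenderbuffer:GL_RENDERBUFFER];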

如果您要尝试捕获覆盖在相机上的实时视频上的图像,我的GPUImage框架可以为您处理。你需要做的就是设置一个GPUImageVideoCamera,一个你需要覆盖的GPUImagePicture实例,以及某种混合过滤器。然后,您可以将输出提供给GPUImageView进行显示,并且能够在任何时候从混合滤镜中捕获静止图像。该框架为您解决了所有这些问题。