How do I convert BGRA bytes to a UIImage so I can save it?

Asked: 2017-05-08 16:14:14

Tags: ios iphone uiimage avcapturesession

I want to capture raw pixel data for manipulation using the GPUImage framework. I capture the data like this:

    // Inside the sample-buffer completion block
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *rawImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes
                                             length:bytesPerRow * CVPixelBufferGetHeight(cameraFrame)];

    // Raw pixel values
    const UInt32 *values = (const UInt32 *)[dataForRawBytes bytes]; //, cnt = [dataForRawBytes length]/sizeof(int);

    // Test out the Dropbox upload here
    [self uploadDropbox:dataForRawBytes];
    // End of Dropbox upload

    // Do whatever with your bytes
    // [self processImages:dataForRawBytes];

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
}];

I use the following settings for the camera:

    NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                  AVVideoCodecJPEG, AVVideoCodecKey,
                                  [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
                                  nil];
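(For reference, a dictionary like this is normally assigned to a capture output's outputSettings before the session starts. A minimal sketch, assuming an AVCaptureSession named session and a plain AVCaptureStillImageOutput, and assuming only uncompressed BGRA output is wanted, so the JPEG codec key is dropped:)

    // Sketch only: stillImageOutput and session are assumed names, not from the question.
    AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    // Ask for uncompressed 32-bit BGRA pixel buffers instead of JPEG data.
    stillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                             @(kCVPixelFormatType_32BGRA) };
    if ([session canAddOutput:stillImageOutput]) {
        [session addOutput:stillImageOutput];
    }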

For testing purposes I want to save the captured image to Dropbox, and to do that I need to save it to the tmp directory first. How do I save dataForRawBytes? Any help would be greatly appreciated!
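What I have in mind for the tmp step is roughly the following (frame.bgra is just a placeholder file name):

    // Write the raw BGRA bytes to the tmp directory so they can be handed to the
    // Dropbox uploader. "frame.bgra" is an arbitrary placeholder name.
    NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"frame.bgra"];
    NSError *writeError = nil;
    BOOL ok = [dataForRawBytes writeToFile:tmpPath
                                   options:NSDataWritingAtomic
                                     error:&writeError];
    if (!ok) {
        NSLog(@"Failed to write raw frame: %@", writeError);
    }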

1 Answer:

Answer 0 (score: 0)

So I was able to figure out how to get a UIImage from the raw data. Here is my modified code:

    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    Byte *rawImageBytes = (Byte *)CVPixelBufferGetBaseAddress(cameraFrame);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(cameraFrame);
    size_t width = CVPixelBufferGetWidth(cameraFrame);
    size_t height = CVPixelBufferGetHeight(cameraFrame);
    NSData *dataForRawBytes = [NSData dataWithBytes:rawImageBytes
                                             length:bytesPerRow * height];
    // Do whatever with your bytes

    // Create a suitable color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a suitable bitmap context (matches the camera output setting kCVPixelFormatType_32BGRA)
    CGContextRef newContext = CGBitmapContextCreate(rawImageBytes, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);

    // Release the color space
    CGColorSpaceRelease(colorSpace);

    // Create a CGImageRef from the bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    UIImage *finalImage = [[UIImage alloc] initWithCGImage:newImage];

    // Release the Core Graphics objects now that the UIImage holds its own reference
    CGImageRelease(newImage);
    CGContextRelease(newContext);
    // The image is captured; now we can test saving it.

I needed to create the color space and related attributes, generate a CGContextRef, and use that to finally get the UIImage; while debugging I could see that the captured image came out correctly.
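To then actually write it to the tmp directory for the Dropbox upload, something along these lines should work (PNG is chosen arbitrarily here, and capture.png is just a placeholder name):

    // Encode the UIImage and write it to the tmp directory.
    NSData *pngData = UIImagePNGRepresentation(finalImage);
    NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.png"];
    [pngData writeToFile:tmpPath atomically:YES];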