Capturing a still image with AVFoundation and converting it to a UIImage

Date: 2013-11-07 00:01:00

Tags: ios objective-c avfoundation

I have the individual pieces for both of these tasks, but I don't know how to put them together. The first code block captures an image, but it only gives me an image buffer, not something I can turn into a UIImage.

- (void) captureStillImage
{
    AVCaptureConnection *stillImageConnection = [[self stillImageOutput] connectionWithMediaType:AVMediaTypeVideo];

    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                         completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {

                                                             if (imageDataSampleBuffer != NULL) {
                                                                 NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

                                                                 // captureImage is created here but never stored or
                                                                 // passed anywhere, so it is released when the block
                                                                 // ends.
                                                                 UIImage *captureImage = [[UIImage alloc] initWithData:imageData];
                                                             }

                                                             if ([[self delegate] respondsToSelector:@selector(captureManagerStillImageCaptured:)]) {
                                                                 [[self delegate] captureManagerStillImageCaptured:self];
                                                             }
                                                         }];
}

Here is an Apple example that takes an image buffer and converts it to a UIImage. How can I combine these two methods?

-(UIImage*) getUIImageFromBuffer:(CMSampleBufferRef) imageSampleBuffer{

    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);

    if (imageBuffer == NULL) {
        NSLog(@"No buffer");
        return nil;
    }

    // Lock the base address of the pixel buffer
    if((CVPixelBufferLockBaseAddress(imageBuffer, 0))==kCVReturnSuccess){
        NSLog(@"Buffer locked successfully");
    }

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    NSLog(@"bytes per row %zu",bytesPerRow );
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSLog(@"width %zu",width);

    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"height %zu",height);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image= [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);


    return image;

}
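Note that for `getUIImageFromBuffer:` to receive a raw pixel buffer at all, the still-image output must be configured to deliver uncompressed BGRA frames; with JPEG output the sample buffer contains compressed data and `CMSampleBufferGetImageBuffer` returns NULL. A sketch of that configuration, assuming the same `stillImageOutput` property used in the question:

```objectivec
// Deliver uncompressed 32BGRA pixel buffers, which is the layout
// the CGBitmapContextCreate call above expects
// (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst).
NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
[[self stillImageOutput] setOutputSettings:outputSettings];
```

Conversely, `jpegStillImageNSDataRepresentation:` only works when the output is configured for `AVVideoCodecJPEG`, so a single output can use one approach or the other, not both.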

1 answer:

Answer 0 (score: 0)

The first code block is exactly what you need and is an acceptable way to do it. What are you trying to do with the second block?
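If the goal is simply to get the `UIImage` out of the completion handler and into the rest of the app, one option is to widen the delegate callback so it carries the image. This is a sketch; the method name `captureManager:didCaptureImage:` is an assumption, not part of the asker's existing protocol:

```objectivec
// Inside the completionHandler block, replacing the existing
// delegate call:
if (imageDataSampleBuffer != NULL) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    UIImage *captureImage = [[UIImage alloc] initWithData:imageData];

    // Hypothetical delegate method that hands the finished
    // image to whoever owns this capture manager.
    if ([[self delegate] respondsToSelector:@selector(captureManager:didCaptureImage:)]) {
        [[self delegate] captureManager:self didCaptureImage:captureImage];
    }
}
```

With JPEG output there is no need for the second method at all: `initWithData:` already produces the `UIImage`.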