Capturing images as 24 bpp bitmaps from the iPhone camera (AVCaptureSession)

Asked: 2014-04-02 18:20:29

Tags: ios iphone objective-c bitmap avcapturesession

I am using AVCaptureSession to capture frames from the iPhone's front camera. I am trying to change the format of the AVCaptureVideoDataOutput so that it captures 24 bpp bitmaps. This code gives me a 32 bpp bitmap without any problems:

AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
outputDevice.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey: (id)kCVPixelBufferPixelFormatTypeKey];
[outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

However, when I change it to 24, it crashes on this line:

outputDevice.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_24RGB] forKey: (id)kCVPixelBufferPixelFormatTypeKey];

How can I capture images at 24 bpp? Why does *kCVPixelFormatType_24RGB* fail? A workaround would be to convert the 32 bpp bitmap to 24 bpp, but I haven't found out how to do that.

1 answer:

Answer 0 (score: 1)

It crashes because the iPhone does not support kCVPixelFormatType_24RGB. The only pixel formats that modern iPhones support for video capture are:

  • kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
  • kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
  • kCVPixelFormatType_32BGRA

You can convert any of these to RGB, although the BGRA buffer is the simplest to convert. There are various ways to do it (search here and on Google for examples), but here is a fairly straightforward approach:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
   didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
          fromConnection:(AVCaptureConnection *)connection
{
  @autoreleasepool {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *sourceBuffer = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

    // Copy the BGRA data out while the buffer is still locked, then unlock.
    size_t bufferSize = bytesPerRow * height;
    uint8_t *bgraData = malloc(bufferSize);
    memcpy(bgraData, sourceBuffer, bufferSize);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Convert BGRA (4 bytes per pixel, rows padded to bytesPerRow)
    // into tightly packed RGB (3 bytes per pixel).
    uint8_t *rgbData = malloc(width * height * 3);
    size_t rgbCount = 0;
    for (size_t row = 0; row < height; row++) {
      for (size_t col = 0; col < width; col++) {
        size_t current = row * bytesPerRow + col * 4;
        rgbData[rgbCount++] = bgraData[current + 2]; // R
        rgbData[rgbCount++] = bgraData[current + 1]; // G
        rgbData[rgbCount++] = bgraData[current];     // B
      }
    }
    free(bgraData);
    //
    // Process rgbData
    //
    free(rgbData);
  }
}

Incidentally, each channel is 8 bpp (not 24 bpp); three 8-bit planes make up a 24-bit image, and four planes make up a 32-bit one. It's also worth pointing out that in most cases it is easier and faster to simply use the 32-bit data and ignore the alpha channel, rather than converting to 24-bit.