将CVImageBuffer转换为YUV420对象

时间:2015-01-11 17:16:47

标签: ios opencv avfoundation

我想从相机维护YUV420格式的流媒体视频以避免转换为灰度的惩罚,但我还想保留颜色分量。最终目标是使用OpenCV等计算机视觉库进行处理。虽然我最终可能会选择BGRA,但我仍然希望能够使用YUV测试工作解决方案。那么如何将像素格式kCVPixelFormatType_420YpCbCr8BiPlanarFullRange的CVImageBuffer转换为单个内存块呢?

拒绝解决方案:

  • CIImage非常方便,但不允许渲染为YUV格式的位图。
  • cv :: Mat用C ++污染您的Obj-C代码

1 个答案:

答案 0 :(得分:2)

AVCaptureSessionDelegate

这将根据指定的像素格式将数据填充到包含字节的NSObject中。我继续前进,为BGRA或YUV像素格式提供检测和malloc内存的能力。所以这个解决方案非常适合测试这两个。

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef videoImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    CVPixelBufferLockBaseAddress(videoImageBuffer, 0);

    void *baseAddress = NULL;
    NSUInteger totalBytes = 0;
    size_t width = CVPixelBufferGetWidth(videoImageBuffer);
    size_t height = 0;
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(videoImageBuffer);
    OSType pixelFormat = CVPixelBufferGetPixelFormatType(videoImageBuffer);
    if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ||
        pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
        size_t planeCount = CVPixelBufferGetPlaneCount(videoImageBuffer);
        baseAddress = CVPixelBufferGetBaseAddressOfPlane(videoImageBuffer, 0);

        for (int plane = 0; plane < planeCount; plane++) {
            size_t planeHeight = CVPixelBufferGetHeightOfPlane(videoImageBuffer, plane);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(videoImageBuffer, plane);
            height += planeHeight;
            totalBytes += (int)planeHeight * (int)bytesPerRow;
        }
    } else if (pixelFormat == kCVPixelFormatType_32BGRA) {
        baseAddress = CVPixelBufferGetBaseAddress(videoImageBuffer);
        height = CVPixelBufferGetHeight(videoImageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(videoImageBuffer);
        totalBytes += (int)height * (int)bytesPerRow;
    }

    // Doesn't have to be an NSData object
    NSData *rawPixelData = [NSData dataWithBytes:baseAddress length:totalBytes];

    // Just a plain-ol-NSObject with the following properties
    NTNUVideoFrame *videoFrame = [[NTNUVideoFrame alloc] init];
    videoFrame.width = width;
    videoFrame.height = height;
    videoFrame.bytesPerRow = bytesPerRow;
    videoFrame.pixelFormat = pixelFormat; 
    // Alternatively if you switch rawPixelData to void *
    // videoFrame.rawPixelData = baseAddress;
    videoFrame.rawPixelData = rawPixelData;
    [self.delegate didUpdateVideoFrame:videoFrame];

    CVPixelBufferUnlockBaseAddress(videoImageBuffer, 0);
}

您唯一需要记住的是,如果您计划切换线程,则需要mallocmemcpy基地址,或者调度_async并且您不使用{{1 }}。解锁基址后,像素数据将不再有效。

NSData

此时你需要考虑在完成后在该内存块上调用free的问题。