Converting kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frames to UIImage

Asked: 2012-01-12 16:23:25

Tags: iphone objective-c ios avfoundation

I have an application that captures live video in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format in order to process the Y channel. From Apple's documentation:

    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma = [0,255], chroma = [1,255]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct.
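For reference, the header that quote mentions is declared in CoreVideo's CVPixelBuffer.h along these lines; both fields of each component are stored big-endian, which is why the first answer below byte-swaps them:

struct CVPlanarComponentInfo {
    int32_t  offset;    // offset of this plane from the buffer's base address (big-endian)
    uint32_t rowBytes;  // bytes per row of this plane (big-endian)
};

struct CVPlanarPixelBufferInfo_YCbCrBiPlanar {
    CVPlanarComponentInfo componentInfoY;
    CVPlanarComponentInfo componentInfoCbCr;
};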

I want to present some of these frames in a UIViewController. Is there any API to convert them to the kCVPixelFormatType_32BGRA format? Can you give some hints on how to adapt this method provided by Apple?

// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer  {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
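For context, the capture side of an app like this typically requests the bi-planar format through the videoSettings of an AVCaptureVideoDataOutput. A minimal sketch, assuming an AVCaptureSession named session is already configured:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Ask for bi-planar full-range Y'CbCr frames
videoOutput.videoSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
};
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL)];
[session addOutput:videoOutput];   // session: hypothetical, configured elsewhere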

Thanks!

3 Answers:

Answer 0 (score: 15):

I'm not aware of any accessible built-in way to convert a bi-planar Y/CbCr image to RGB in iOS. However, you should be able to perform the conversion yourself in software, e.g.

uint8_t clamp(int16_t input)
{
    // clamp negative numbers to 0; assumes signed shifts
    // (a valid assumption on iOS)
    input &= ~(input >> 16);

    // clamp numbers greater than 255 to 255; the accumulation
    // of the mask looks odd but is an attempt to avoid
    // pipeline stalls
    uint8_t saturationMask = input >> 8;
    saturationMask |= saturationMask << 4;
    saturationMask |= saturationMask << 2;
    saturationMask |= saturationMask << 1;
    input |= saturationMask;

    return input & 0xff;
}

...

CVPixelBufferLockBaseAddress(imageBuffer, 0);

size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

uint8_t *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

NSUInteger yOffset = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
NSUInteger yPitch = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);

NSUInteger cbCrOffset = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
NSUInteger cbCrPitch = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);

uint8_t *rgbBuffer = malloc(width * height * 3);
uint8_t *yBuffer = baseAddress + yOffset;
uint8_t *cbCrBuffer = baseAddress + cbCrOffset;

for(int y = 0; y < height; y++)
{
    uint8_t *rgbBufferLine = &rgbBuffer[y * width * 3];
    uint8_t *yBufferLine = &yBuffer[y * yPitch];
    uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

    for(int x = 0; x < width; x++)
    {
        // from ITU-R BT.601, rounded to integers; the values are widened
        // to int16_t so the intermediate math keeps its sign
        int16_t yp = yBufferLine[x] - 16;
        int16_t cb = cbCrBufferLine[x & ~1] - 128;
        int16_t cr = cbCrBufferLine[x | 1] - 128;

        uint8_t *rgbOutput = &rgbBufferLine[x*3];

        rgbOutput[0] = clamp((298 * yp + 409 * cr + 128) >> 8);
        rgbOutput[1] = clamp((298 * yp - 100 * cb - 208 * cr + 128) >> 8);
        rgbOutput[2] = clamp((298 * yp + 516 * cb + 128) >> 8);
    }

}
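
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);   // balance the lock taken above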

This was written straight into the answer box and is untested, but I believe the cb/cr extraction is correct. You'd then use CGBitmapContextCreate with rgbBuffer to create a CGImage, and from that a UIImage.
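One wrinkle with that final step: on iOS, CGBitmapContextCreate does not support 24-bit RGB with no alpha channel (the next answer pads to 4 bytes per pixel for exactly this reason). One alternative that keeps the 3-byte pixels is to hand rgbBuffer to a CGDataProvider instead; a minimal, equally untested sketch:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgbBuffer,
                                                          width * height * 3, NULL);
CGImageRef cgImage = CGImageCreate(width, height,
                                   8,          // bits per component
                                   24,         // bits per pixel
                                   width * 3,  // bytes per row
                                   colorSpace, kCGImageAlphaNone,
                                   provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);

Note that with a NULL release callback, as here, the provider reads rgbBuffer lazily, so the buffer must stay alive for as long as the image does; in real code, copy it or supply a release callback.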

Answer 1 (score: 15):

Most implementations I found (including the previous answer here) won't work if you change the videoOrientation of the AVCaptureConnection (for some reason I don't fully understand, the CVPlanarPixelBufferInfo_YCbCrBiPlanar struct will be empty in that case), so I wrote one that does (most of the code was based on this answer). My implementation also adds an empty alpha channel to the RGB buffer and creates the CGBitmapContext using the kCGImageAlphaNoneSkipLast flag (there's no alpha data, but iOS seems to require 4 bytes per pixel). Here it is:

#define clamp(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))

- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);

    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *yBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t yPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    uint8_t *cbCrBuffer = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t cbCrPitch = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

    int bytesPerPixel = 4;
    uint8_t *rgbBuffer = malloc(width * height * bytesPerPixel);

    for(int y = 0; y < height; y++) {
        uint8_t *rgbBufferLine = &rgbBuffer[y * width * bytesPerPixel];
        uint8_t *yBufferLine = &yBuffer[y * yPitch];
        uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

        for(int x = 0; x < width; x++) {
            int16_t yp = yBufferLine[x];   // yp, so it doesn't shadow the row index y
            int16_t cb = cbCrBufferLine[x & ~1] - 128;
            int16_t cr = cbCrBufferLine[x | 1] - 128;

            uint8_t *rgbOutput = &rgbBufferLine[x*bytesPerPixel];

            int16_t r = (int16_t)roundf( yp + cr *  1.4 );
            int16_t g = (int16_t)roundf( yp + cb * -0.343 + cr * -0.711 );
            int16_t b = (int16_t)roundf( yp + cb *  1.765 );

            rgbOutput[0] = 0xff;
            rgbOutput[1] = clamp(b);
            rgbOutput[2] = clamp(g);
            rgbOutput[3] = clamp(r);
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbBuffer, width, height, 8, width * bytesPerPixel, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(quartzImage);
    free(rgbBuffer);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
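For completeness, here's a sketch of how a method like this is typically driven from the capture callback. The delegate fires on the queue passed to setSampleBufferDelegate:queue:, so UIKit work has to hop to the main queue; self.imageView is a hypothetical outlet:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;   // hypothetical UIImageView
    });
}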

Answer 2 (score: 0):

These other answers with bit shifting and magic variables are wild. Here's an alternative approach using the Accelerate framework in Swift 5. It takes a frame from a buffer whose pixel format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (Bi-Planar Component Y'CbCr 8-bit 4:2:0), produces an ARGB8888 pixel buffer, and then converts that to a UIImage. But you could modify it to handle any input/output formats:
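The Swift 5 listing itself is missing from this copy of the page. As a stand-in, here is a hedged sketch of the same Accelerate/vImage approach in Objective-C; the method name is invented, and the full-range vImage_YpCbCrPixelRange values follow Apple's documented example for full-range 8-bit video, so verify them against your source:

#import <Accelerate/Accelerate.h>

// Hypothetical name; a sketch only, not the answer's original Swift listing.
- (UIImage *)imageFromSampleBufferUsingVImage:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    // Wrap the two source planes in vImage_Buffer descriptors.
    vImage_Buffer srcYp = {
        .data     = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0),
        .width    = CVPixelBufferGetWidthOfPlane(imageBuffer, 0),
        .height   = CVPixelBufferGetHeightOfPlane(imageBuffer, 0),
        .rowBytes = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
    };
    vImage_Buffer srcCbCr = {
        .data     = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1),
        .width    = CVPixelBufferGetWidthOfPlane(imageBuffer, 1),
        .height   = CVPixelBufferGetHeightOfPlane(imageBuffer, 1),
        .rowBytes = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1)
    };

    // Describe the YpCbCr -> ARGB conversion; in production this would be
    // generated once and cached rather than rebuilt per frame.
    vImage_YpCbCrPixelRange pixelRange = { 0, 128, 255, 255, 255, 1, 255, 0 }; // full range (assumed)
    vImage_YpCbCrToARGB info;
    vImageConvert_YpCbCrToARGB_GenerateConversion(
        kvImage_YpCbCrToARGBMatrix_ITU_R_601_4, &pixelRange, &info,
        kvImage420Yp8_CbCr8, kvImageARGB8888, kvImageNoFlags);

    // Allocate the 32-bit destination and run the accelerated conversion.
    vImage_Buffer dest;
    vImageBuffer_Init(&dest, srcYp.height, srcYp.width, 32, kvImageNoFlags);
    uint8_t permuteMap[4] = {0, 1, 2, 3};   // keep ARGB channel order
    vImageConvert_420Yp8_CbCr8ToARGB8888(&srcYp, &srcCbCr, &dest, &info,
                                         permuteMap, 255, kvImageNoFlags);

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    // Wrap the ARGB pixels in a CGImage, then a UIImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(dest.data, dest.width, dest.height,
                                                 8, dest.rowBytes, colorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    CGImageRelease(quartzImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(dest.data);   // vImageBuffer_Init allocated this

    return image;
}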