I am trying to render frames from a video with OpenGL ES using the specific format 420YpCbCr8BiPlanarFullRange.
The context is the following: I process a frame coming from an iOS device with OpenCV, convert it to a UIImage, and then convert that UIImage to a CVImageBufferRef so that I can render it with OpenGL ES.
So, simply put: iOS device -> OpenCV Mat -> UIImage -> CVImageBufferRef -> rendering with OpenGL ES
At the CGBitmapContextCreate step I get this error:
Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 5184 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedLast.
Jul 20 16:36:01
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
After checking all the related posts on Stack Overflow, I found that this error comes from the number of bytes per row. So I printed it out, and this is what I got:
[INFO IMAGE] Width 1296.000000, Height 968.000000, Byte per row 1948
Indeed, as this post explains, my bytes per row is not at least four times the width of the image.
But I really don't understand what my mistake is in this process. The UIImage looks fine and has 4 channels.
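To make the numbers explicit, here is the arithmetic behind that check (just a sketch, using the values from my logs):

// Illustration only: an 8-bit, 4-component RGBA bitmap context needs
// bytesPerRow >= width * 4.
size_t width          = 1296;       // width reported in the log above
size_t minBytesPerRow = width * 4;  // = 5184, the value in the error message
NSLog(@"CGBitmapContextCreate wants at least %zu bytes per row", minBytesPerRow);
// Neither the 1948 printed by my NSLog nor the 1296 of plane 0 in the dump
// at the end of this post reaches 5184.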
Here is the code:
UIImage *imageTest = MatToUIImage(imageCV);
CGImageRef imageRef = [imageTest CGImage];
NSLog(@"Size UIImage: (%f,%f)", imageTest.size.width, imageTest.size.height);
CVImageBufferRef pixelBuffer = [self pixelBufferFromCGImage:imageRef];
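As a side note, this is roughly how the CGImage layout can be inspected to back up the "4 channels" claim (illustration only, not part of my pipeline):

// Illustration only: query the layout of the CGImage produced by MatToUIImage
NSLog(@"CGImage: %zu x %zu, %zu bits/pixel, %zu bytes/row",
      CGImageGetWidth(imageRef), CGImageGetHeight(imageRef),
      CGImageGetBitsPerPixel(imageRef), CGImageGetBytesPerRow(imageRef));
// For a 1296-pixel-wide RGBA image I would expect 32 bits/pixel and
// bytes/row >= 1296 * 4 = 5184.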
The main function that performs the conversion:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
    // Set pixel buffer attributes so we get an IOSurface-backed buffer
    NSDictionary *pixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSDictionary dictionary], (__bridge id)kCVPixelBufferIOSurfacePropertiesKey,
                                           nil];
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width, frameSize.height,
                                          kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                          (__bridge CFDictionaryRef)pixelBufferAttributes, &pixelBuffer);
    if (status != kCVReturnSuccess) {
        return NULL;
    }
    NSLog(@"[INFO IMAGE] Width %f, Height %f, Byte per row %zi",
          frameSize.width, frameSize.height, CVPixelBufferGetBytesPerRow(pixelBuffer));

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
    // Get the Y plane
    const uint8_t *yDestPlane = reinterpret_cast<uint8_t *>(CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0));
    // Get the CbCr plane
    const uint8_t *uvDestPlane = reinterpret_cast<uint8_t *>(CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1));

    // Draw the CGImage into the buffer through a Core Graphics bitmap context
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height,
                                                 8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Little);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    NSLog(@"[INFO IMAGE] return pixel buffer done");
    return pixelBuffer;
}
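For comparison, here is a small sketch (not code from my project, just to illustrate the bytes-per-row difference): a single-plane 32BGRA buffer of the same size should report a row stride of at least width * 4, which appears to be what CGBitmapContextCreate expects:

// Sketch only: create a single-plane 32BGRA buffer of the same size and
// compare its bytes per row with the biplanar 420f buffer above.
CVPixelBufferRef bgraBuffer = NULL;
NSDictionary *attrs = @{ (__bridge id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVReturn st = CVPixelBufferCreate(kCFAllocatorDefault, 1296, 968,
                                  kCVPixelFormatType_32BGRA,
                                  (__bridge CFDictionaryRef)attrs, &bgraBuffer);
if (st == kCVReturnSuccess) {
    // I would expect bytesPerRow >= 1296 * 4 = 5184 here.
    NSLog(@"32BGRA bytesPerRow = %zu", CVPixelBufferGetBytesPerRow(bgraBuffer));
    CVPixelBufferRelease(bgraBuffer);
}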
Here is the dump of my pixel buffer:
<CVPixelBuffer 0x127f5bf60 width=1296 height=968 pixelFormat=420f iosurface=0x12b900628 planes=2>
<Plane 0 width=1296 height=968 bytesPerRow=1296>
<Plane 1 width=648 height=484 bytesPerRow=1296>
<attributes=<CFBasicHash 0x127f47260 [0x1a1462150]>{type = immutable dict, count = 2,
entries =>
0 : <CFString 0x19ce8fca8 [0x1a1462150]>{contents = "PixelFormatDescription"} = <CFBasicHash 0x127e7f900 [0x1a1462150]>{type = immutable dict, count = 10,
entries =>
0 : <CFString 0x19ce8ff08 [0x1a1462150]>{contents = "Planes"} = <CFArray 0x127e7f6f0 [0x1a1462150]>{type = mutable-small, count = 2, values = (
0 : <CFBasicHash 0x127e7f940 [0x1a1462150]>{type = mutable dict, count = 3,
entries =>
0 : <CFString 0x19ce901a8 [0x1a1462150]>{contents = "FillExtendedPixelsCallback"} = <CFData 0x127e7fac0 [0x1a1462150]>{length = 24, capacity = 24, bytes = 0x000000000000000030535685010000000000000000000000}
1 : <CFString 0x19ce8ffa8 [0x1a1462150]>{contents = "BitsPerBlock"} = <CFNumber 0xb000000000000082 [0x1a1462150]>{value = +8, type = kCFNumberSInt32Type}
2 : <CFString 0x19ce8ffc8 [0x1a1462150]>{contents = "BlackBlock"} = <CFData 0x127e7f980 [0x1a1462150]>{length = 1, capacity = 1, bytes = 0x00}
}
1 : <CFBasicHash 0x127e7fb50 [0x1a1462150]>{type = mutable dict, count = 5,
entries =>
2 : <CFString 0x19ce8ffe8 [0x1a1462150]>{contents = "HorizontalSubsampling"} = <CFNumber 0xb000000000000022 [0x1a1462150]>{value = +2, type = kCFNumberSInt32Type}
3 : <CFString 0x19ce8ffc8 [0x1a1462150]>{contents = "BlackBlock"} = <CFData 0x127e7fbb0 [0x1a1462150]>{length = 2, capacity = 2, bytes = 0x8080}
4 : <CFString 0x19ce8ffa8 [0x1a1462150]>{contents = "BitsPerBlock"} = <CFNumber 0xb000000000000102 [0x1a1462150]>{value = +16, type = kCFNumberSInt32Type}
5 : <CFString 0x19ce90008 [0x1a1462150]>{contents = "VerticalSubsampling"} = <CFNumber 0xb000000000000022 [0x1a1462150]>{value = +2, type = kCFNumberSInt32Type}
6 : <CFString 0x19ce901a8 [0x1a1462150]>{contents = "FillExtendedPixelsCallback"} = <CFData 0x127e7fc80 [0x1a1462150]>{length = 24, capacity = 24, bytes = 0x0000000000000000ac515685010000000000000000000000}
}
Could you help me? I have gone through all the related posts, but I could not find a way to make this work properly.