I'm decoding an H.264 stream received over the network with a VTDecompressionSession, and I need to copy the YUV data out of the image buffer I'm given. I've verified that the typeID of the given imageBuffer equals CVPixelBufferGetTypeID(). But whenever I try to retrieve the base address of the buffer, or of any of its planes, I always get back NULL. The OSStatus iOS passes in is 0, so my assumption is that nothing is wrong there. Maybe I just don't know how to extract the data. Can anybody help?
void decompressionCallback(void * CM_NULLABLE decompressionOutputRefCon,
                           void * CM_NULLABLE sourceFrameRefCon,
                           OSStatus status,
                           VTDecodeInfoFlags infoFlags,
                           CM_NULLABLE CVImageBufferRef imageBuffer,
                           CMTime presentationTimeStamp,
                           CMTime presentationDuration)
{
    CFShow(imageBuffer);
    size_t dataSize = CVPixelBufferGetDataSize(imageBuffer);
    void *decodedBuffer = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(pYUVBuffer, decodedBuffer, dataSize);
}
Edit: here is a dump of the CVImageBufferRef object as well. One thing that looks suspicious is that I would expect three planes (Y, U, and V), yet there are only two. My expectation was to use CVPixelBufferGetBaseAddressOfPlane to extract each plane of data. I'm implementing this to remove a dependency on a separate software codec, so I need to extract each plane this way because the rest of my rendering pipeline requires it.
{type = immutable dict, count = 5, entries =>
    0 : {contents = "PixelFormatDescription"} = {type = immutable dict, count = 10, entries =>
        0 : {contents = "Planes"} = {type = mutable-small, count = 2, values = (
            0 : {type = mutable dict, count = 3, entries =>
                0 : {contents = "FillExtendedPixelsCallback"} = {length = 24, capacity = 24, bytes = 0x000000000000000030139783010000000000000000000000}
                1 : {contents = "BitsPerBlock"} = {value = +8, type = kCFNumberSInt32Type}
                2 : {contents = "BlackBlock"} = {length = 1, capacity = 1, bytes = 0x10}}
            1 : {type = mutable dict, count = 5, entries =>
                2 : {contents = "HorizontalSubsampling"} = {value = +2, type = kCFNumberSInt32Type}
                3 : {contents = "BlackBlock"} = {length = 2, capacity = 2, bytes = 0x8080}
                4 : {contents = "BitsPerBlock"} = {value = +16, type = kCFNumberSInt32Type}
                5 : {contents = "VerticalSubsampling"} = {value = +2, type = kCFNumberSInt32Type}
                6 : {contents = "FillExtendedPixelsCallback"} = {length = 24, capacity = 24, bytes = 0x0000000000000000ac119783010000000000000000000000}}
        )}
        2 : {contents = "IOSurfaceOpenGLESFBOCompatibility"} = {value = true}
        3 : {contents = "ContainsYCbCr"} = {value = true}
        4 : {contents = "IOSurfaceOpenGLESTextureCompatibility"} = {value = true}
        5 : {contents = "ComponentRange"} = {contents = "VideoRange"}
        6 : {contents = "PixelFormat"} = {value = +875704438, type = kCFNumberSInt32Type}
        7 : {contents = "IOSurfaceCoreAnimationCompatibility"} = {value = true}
        9 : {contents = "ContainsAlpha"} = {value = false}
        10 : {contents = "ContainsRGB"} = {value = false}
        11 : {contents = "OpenGLESCompatibility"} = {value = true}}
    2 : {contents = "ExtendedPixelsRight"} = {value = +8, type = kCFNumberSInt32Type}
    3 : {contents = "ExtendedPixelsTop"} = {value = +0, type = kCFNumberSInt32Type}
    4 : {contents = "ExtendedPixelsLeft"} = {value = +0, type = kCFNumberSInt32Type}
    5 : {contents = "ExtendedPixelsBottom"} = {value = +0, type = kCFNumberSInt32Type}}
propagatedAttachments = {type = mutable dict, count = 7, entries =>
    0 : {contents = "CVImageBufferChromaLocationTopField"} = Left
    1 : {contents = "CVImageBufferYCbCrMatrix"} = {contents = "ITU_R_601_4"}
    2 : {contents = "ColorInfoGuessedBy"} = {contents = "VideoToolbox"}
    5 : {contents = "CVImageBufferColorPrimaries"} = SMPTE_C
    8 : {contents = "CVImageBufferTransferFunction"} = {contents = "ITU_R_709_2"}
    10 : {contents = "CVImageBufferChromaLocationBottomField"} = Left
    12 : {contents = "CVFieldCount"} = {value = +1, type = kCFNumberSInt32Type}}
nonPropagatedAttachments = {type = mutable dict, count = 0, entries => }
Answer (score: 2)
So your format is kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', and two planes make sense for 4:2:0 YUV data: the first plane is the full-size, single-channel Y bitmap, and the second is a half-width, half-height, two-channel interleaved CbCr (UV) bitmap.
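As a sanity check, you can confirm this at runtime. A minimal sketch, reusing the imageBuffer argument from the question (the value 875704438 in the dump above is the FourCC '420v'):

    OSType fmt = CVPixelBufferGetPixelFormatType(imageBuffer);
    if (fmt == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
        // Biplanar layout: plane 0 = Y, plane 1 = interleaved CbCr.
        printf("planes: %zu\n", CVPixelBufferGetPlaneCount(imageBuffer)); // prints 2
    }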
You're right that for planar data you should call CVPixelBufferGetBaseAddressOfPlane, although you should also be able to use CVPixelBufferGetBaseAddress and interpret its result as a CVPlanarPixelBufferInfo_YCbCrBiPlanar. So the problem is probably that you're not calling CVPixelBufferLockBaseAddress before CVPixelBufferGetBaseAddress* and CVPixelBufferUnlockBaseAddress after.
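Putting that together, a minimal sketch of the callback with the missing lock/unlock calls and a per-plane copy might look like the following. pYUVBuffer is assumed to be the caller's destination buffer from the question, and the copy goes row by row because each plane's bytes-per-row can include padding beyond the visible width:

    #include <string.h>
    #include <VideoToolbox/VideoToolbox.h>

    extern uint8_t *pYUVBuffer; // assumption: destination buffer owned by the caller, as in the question

    void decompressionCallback(void * CM_NULLABLE decompressionOutputRefCon,
                               void * CM_NULLABLE sourceFrameRefCon,
                               OSStatus status,
                               VTDecodeInfoFlags infoFlags,
                               CM_NULLABLE CVImageBufferRef imageBuffer,
                               CMTime presentationTimeStamp,
                               CMTime presentationDuration)
    {
        if (status != noErr || imageBuffer == NULL) return;

        // Lock before reading any base address; read-only since we only copy out.
        CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

        uint8_t *dst = pYUVBuffer;
        size_t planeCount = CVPixelBufferGetPlaneCount(imageBuffer); // 2 for '420v'
        for (size_t p = 0; p < planeCount; p++) {
            const uint8_t *src = (const uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, p);
            size_t height    = CVPixelBufferGetHeightOfPlane(imageBuffer, p);
            size_t width     = CVPixelBufferGetWidthOfPlane(imageBuffer, p);
            size_t srcStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, p);
            // Plane 0 is 1 byte per pixel (Y); plane 1 is 2 bytes per pixel (Cb+Cr pair).
            size_t rowBytes = width * (p == 0 ? 1 : 2);
            for (size_t row = 0; row < height; row++) {
                memcpy(dst, src + row * srcStride, rowBytes);
                dst += rowBytes;
            }
        }

        CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    }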
From there you can display the two YUV planes efficiently with Metal or OpenGL by writing some fun YUV -> RGB shader code.
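For reference, the per-pixel math such a shader would implement, given the video-range BT.601 matrix reported in the dump (CVImageBufferYCbCrMatrix = ITU_R_601_4), looks roughly like this. A CPU-side C sketch using the standard rounded coefficients, just to illustrate the arithmetic a fragment shader would perform on its texture samples:

    #include <stdint.h>

    static uint8_t clamp255(float v) {
        return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v + 0.5f));
    }

    // Video-range BT.601 YCbCr -> RGB for one pixel: luma occupies 16..235,
    // chroma is centered at 128.
    static void ycbcr601VideoRangeToRGB(uint8_t y, uint8_t cb, uint8_t cr,
                                        uint8_t *r, uint8_t *g, uint8_t *b) {
        float yf  = 1.164f * (y - 16);  // expand video-range luma to full range
        float cbf = (float)cb - 128.0f;
        float crf = (float)cr - 128.0f;
        *r = clamp255(yf + 1.596f * crf);
        *g = clamp255(yf - 0.391f * cbf - 0.813f * crf);
        *b = clamp255(yf + 2.018f * cbf);
    }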