My image size is 640 x 480, and I want YV12 as the output format. An API is processing the image, and I want to get its output in YV12 format.
What I am trying right now is:
int width = 640;
int height = 480;
byte[] byteArray = new byte[Convert.ToInt32((width * height + (2 * ((width * height) / 4))) * 1.5)];
captureDevice.GetPreviewBufferYCbCr(byteArray);
int YLength = Convert.ToInt32(width * height * 1.5);
// UVLength multiplied by 2 because it is for both U and V
int UVLength = Convert.ToInt32(2 * (width / 2) * (height / 2) * 1.5);
var inputYBuffer = byteArray.AsBuffer(0, YLength);
var inputUVBuffer = byteArray.AsBuffer(YLength, UVLength);
var inputBuffers = new IBuffer[] { inputYBuffer, inputUVBuffer };
var inputScanlines = new uint[] { (uint)YLength, (uint)UVLength };
// Creating an input buffer in NV12 (Yvu420Sp) layout
Bitmap inputBtm = new Bitmap(
    outputBufferSize,
    ColorMode.Yvu420Sp,
    inputScanlines, // 1.5 bytes per pixel in Yuv420Sp mode
    inputBuffers);
// The V plane has the same length as the U plane in the output buffer, so vLength is used for both.
int vLength = UVLength / 2;
var outputYBuffer = byteArray.AsBuffer(0, YLength);
var outputVBuffer = byteArray.AsBuffer(YLength, vLength);
var outputUBuffer = byteArray.AsBuffer(YLength + vLength, vLength);
var outputBuffers = new IBuffer[] { outputYBuffer, outputVBuffer, outputUBuffer };
//
var outputScanlines = new uint[] { (uint)YLength, (uint)vLength, (uint)vLength };
Bitmap outputBtm = new Bitmap(
    outputBufferSize,
    ColorMode.Yuv420P,
    outputScanlines,
    outputBuffers);
So what I am asking is whether I am creating the output buffer correctly for the YUV420P format. There is an API to which I pass an NV12 input buffer, and I assume it will give me the output buffer in YV12 (YUV420P) format. So I created the Y plane with width x height x 1.5 bytes; 1.5 because YV12 is 12 bits per pixel. Similarly, the U plane has width/2 x height/2 x 1.5 bytes, and likewise the V plane. I do not care whether the result is YUV420P or YVU420P; as long as the format is otherwise correct, I only need to swap the U and V planes, as in the sketch below.
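For reference, a minimal sketch (not SDK code; the frame, yLength and chromaLength parameters are illustrative) of what swapping the two chroma planes of a contiguous planar 4:2:0 frame could look like:

// Swap the two chroma planes of a planar 4:2:0 frame in place,
// turning YUV420P (I420) into YVU420P (YV12) or vice versa.
// Requires: using System;  (for Buffer.BlockCopy)
static void SwapChromaPlanes(byte[] frame, int yLength, int chromaLength)
{
    var temp = new byte[chromaLength];
    // The first chroma plane starts right after Y, the second right after that.
    Buffer.BlockCopy(frame, yLength, temp, 0, chromaLength);
    Buffer.BlockCopy(frame, yLength + chromaLength, frame, yLength, chromaLength);
    Buffer.BlockCopy(temp, 0, frame, yLength + chromaLength, chromaLength);
}

Since the Y plane and the per-plane sizes are identical in both layouts, this swap is the only difference between the two formats.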
Answer 0 (score: 7)
A YUV 4:2:0 planar buffer looks like this:
-------------------------
|      Y      | Cb | Cr |
-------------------------
Where:
Y = width x height pixels (bytes)
Cb = Y / 4 pixels (bytes)
Cr = Y / 4 pixels (bytes)
Total num pixels (bytes) = width * height * 3 / 2
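Plugging in the 640 x 480 frame from the question (a small arithmetic sketch; variable names are illustrative):

int width = 640, height = 480;
int ySize = width * height;               // 307200 bytes for the Y plane
int cSize = (width / 2) * (height / 2);   // 76800 bytes per chroma plane (= ySize / 4)
int total = ySize + 2 * cSize;            // 460800 bytes = width * height * 3 / 2

// Offsets of the three planes inside one contiguous frame buffer:
int yOffset = 0;
int firstChromaOffset = ySize;            // Cr for YV12, Cb for I420/YUV420P
int secondChromaOffset = ySize + cSize;

Note that none of these sizes carry an extra 1.5 factor: width * height * 3 / 2 is already the total for all three planes combined.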
This is how the pixels are placed with 4:2:0 subsampling: each chroma value is shared between a 2 x 2 block of 4 luminance pixels.
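To make that sharing concrete, here is a hedged sketch (illustrative code, not part of any SDK) of how the sample indices for a pixel at (x, y) could be computed in a planar 4:2:0 buffer:

int width = 640, height = 480;
int ySize = width * height;
int cSize = (width / 2) * (height / 2);

int x = 3, y = 5;                                  // an arbitrary pixel position
int yIndex = y * width + x;                        // one luma sample per pixel
int cOffset = (y / 2) * (width / 2) + (x / 2);     // one chroma sample per 2x2 pixel block

// Plane order is Y, Cb, Cr for I420/YUV420P; for YV12 swap the two chroma planes.
int cbIndex = ySize + cOffset;                     // index of the shared Cb byte
int crIndex = ySize + cSize + cOffset;             // index of the shared Cr byte

Pixels (2, 4), (3, 4), (2, 5) and (3, 5) all produce the same cOffset, which is exactly the 4-to-1 sharing described above.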