How to convert YUV to CIImage on iOS

Asked: 2014-09-04 07:30:13

Tags: ios image-processing uiimage yuv ciimage

I am trying to convert a YUV image to a CIImage, and ultimately a UIImage. I am fairly new at this and trying to figure out an easy way to do it. From what I have learned, since iOS 6 YUV can be used directly to create a CIImage, but as I try to create it the CIImage only holds a nil value. My code is like this ->

NSLog(@"Started DrawVideoFrame\n");

CVPixelBufferRef pixelBuffer = NULL;

CVReturn ret = CVPixelBufferCreateWithBytes(
                                            kCFAllocatorDefault, iWidth, iHeight, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                            lpData, bytesPerRow, 0, 0, 0, &pixelBuffer
                                            );

if(ret != kCVReturnSuccess)
{
    NSLog(@"CVPixelBufferCreateWithBytes failed");
    CVPixelBufferRelease(pixelBuffer);
}

NSDictionary *opt =  @{ (id)kCVPixelBufferPixelFormatTypeKey :
                      @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

CIImage *cimage = [CIImage imageWithCVPixelBuffer:pixelBuffer options:opt];
NSLog(@"CURRENT CIImage -> %p\n", cimage);

UIImage *image = [UIImage imageWithCIImage:cimage scale:1.0 orientation:UIImageOrientationUp];
NSLog(@"CURRENT UIImage -> %p\n", image);

Here lpData is the YUV data, an array of unsigned char.

This also looks interesting: vImageMatrixMultiply — I couldn't find any example of it. Can anyone help me with this?

2 Answers:

Answer 0 (score: 4)

I have faced a similar problem. I was trying to display YUV (NV12) format data on the screen. This solution is working in my project...

// YUV(NV12) --> CIImage --> UIImage conversion
NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;

CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      640,
                                      480,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)(pixelAttributes),
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create cvpixelbuffer %d", result);
    return;
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Here y_ch0 is the Y plane of the YUV(NV12) data.
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, y_ch0, 640 * 480);

// Here y_ch1 is the interleaved UV plane of the YUV(NV12) data.
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, y_ch1, 640 * 480 / 2);

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// CIImage Conversion    
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

CIContext *MytemporaryContext = [CIContext contextWithOptions:nil];
CGImageRef MyvideoImage = [MytemporaryContext createCGImage:coreImage
                                                    fromRect:CGRectMake(0, 0, 640, 480)];

// UIImage Conversion
UIImage *Mynnnimage = [[UIImage alloc] initWithCGImage:MyvideoImage 
                                                 scale:1.0 
                                           orientation:UIImageOrientationRight];

CVPixelBufferRelease(pixelBuffer);
CGImageRelease(MyvideoImage);

Here I show the structure of the YUV (NV12) data and how we obtain the Y plane (y_ch0) and the UV plane (y_ch1) used to create the CVPixelBufferRef. Let's look at the YUV (NV12) data structure. (figure: NV12 data layout) From the picture we can read off the following about YUV (NV12):

  • Total frame size = width * height * 3/2,
  • Y-plane size = total size * 2/3,
  • UV-plane size = total size * 1/3,
  • Data stored in the Y plane -> {Y1, Y2, Y3, Y4, Y5, ...},
  • Data stored in the UV plane (interleaved) -> {U1, V1, U2, V2, U3, V3, ...}.

I hope this helps everyone. :) Enjoy iOS development! :D

Answer 1 (score: 0)

If you have a video frame object that looks like this:

int width,
int height,
unsigned long long time_stamp,
unsigned char *yData,
unsigned char *uData,
unsigned char *vData,
int yStride,
int uStride,
int vStride

You can fill the pixelBuffer with the following:

NSDictionary *pixelAttributes = @{(NSString *)kCVPixelBufferIOSurfacePropertiesKey:@{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                        width,
                                        height,
                                        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,   //  NV12
                                        (__bridge CFDictionaryRef)(pixelAttributes),
                                        &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create cvpixelbuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *yDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
for (int i = 0, k = 0; i < height; i ++) {
    for (int j = 0; j < width; j ++) {
        yDestPlane[k++] = yData[j + i * yStride]; 
    }
}
unsigned char *uvDestPlane = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
for (int i = 0, k = 0; i < height / 2; i ++) {
    for (int j = 0; j < width / 2; j ++) {
        uvDestPlane[k++] = uData[j + i * uStride]; 
        uvDestPlane[k++] = vData[j + i * vStride]; 
    }
}

Now you can convert it to a CIImage:

CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *tempContext = [CIContext contextWithOptions:nil];
CGImageRef coreImageRef = [tempContext createCGImage:coreImage
                                        fromRect:CGRectMake(0, 0, width, height)];

And to a UIImage, if needed. (The image orientation depends on your input.)

UIImage *myUIImage = [[UIImage alloc] initWithCGImage:coreImageRef
                                    scale:1.0
                                    orientation:UIImageOrientationUp];

Don't forget to release the variables:

CVPixelBufferRelease(pixelBuffer);
CGImageRelease(coreImageRef);