Manipulating the height and width of a CVPixelBufferRef

Asked: 2017-01-02 09:55:23

Tags: ios uiimage gpuimage cvpixelbuffer

I am using the GPUImage library and need to manipulate the height and width of a CVPixelBuffer. I record video in portrait, and when the user rotates the device the screen switches to landscape. I want the landscape frames to be aspect-fitted onto the portrait-sized screen.

For example: I start recording in portrait at 320x568. When I turn the device to landscape the frame becomes 568x320, and I want it to fit inside 320x568 while keeping its aspect ratio. To do this I manipulate the CVPixelBuffer as shown in the method below, but the approach consumes a lot of memory and eventually my app crashes.
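For reference, the aspect-fit rectangle can be computed with AVFoundation's AVMakeRectWithAspectRatioInsideRect (the same call the method below uses). A quick worked sketch with the sizes from the example above:

    #import <AVFoundation/AVFoundation.h>

    // A 568x320 landscape frame fitted inside a 320x568 portrait target:
    CGRect fitted = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(568, 320),
                                                        CGRectMake(0, 0, 320, 568));
    // fitted ≈ {x = 0, y ≈ 193.9, width = 320, height ≈ 180.3},
    // i.e. the frame is scaled to 320 points wide and letterboxed vertically.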

    - (CVPixelBufferRef)GPUImageCreateResizedSampleBufferWithBuffer:(CVPixelBufferRef)cameraFrame withBuffer:(CGSize)finalSize withSampleBuffer:(CMSampleBufferRef)sampleBuffer
    {
        CVPixelBufferRef pixel_buffer = NULL;

        // CVPixelBufferCreateWithPlanarBytes for YUV input
        @autoreleasepool {
            CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

            CVPixelBufferLockBaseAddress(cameraFrame, 0);
            GLubyte *sourceImageBytes = (GLubyte *)CVPixelBufferGetBaseAddress(cameraFrame);
            CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes, CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
            CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
            CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);

            GLubyte *imageData = (GLubyte *)calloc(1, (int)finalSize.width * (int)finalSize.height * 4);

            CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

            CGRect scaledRect = AVMakeRectWithAspectRatioInsideRect(originalSize, CGRectMake(0, 0, finalSize.width, finalSize.height));

            CGContextDrawImage(imageContext, scaledRect, cgImageFromBytes);
            CGImageRelease(cgImageFromBytes);
            CGContextRelease(imageContext);
            CGColorSpaceRelease(genericRGBColorspace);
            CGDataProviderRelease(dataProvider);

            CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA, imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);

            CMVideoFormatDescriptionRef videoInfo = NULL;
            CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

            CMTime frameTime = CMTimeMake(1, 30);
            CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

            CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, &sampleBuffer);
            CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
            CFRelease(videoInfo);
            // CVPixelBufferRelease(pixel_buffer);
        }
        return pixel_buffer;
    }
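One note on the memory growth: CVPixelBufferCreateWithBytes and CMSampleBufferCreateForImageBuffer both follow the Core Foundation Create rule, so each object they produce must be released exactly once. The sample buffer created on the last line is written into the by-value sampleBuffer parameter and never leaves the method, and the returned pixel buffer has to be released by whoever receives it. A minimal caller-side sketch; the variable names are assumptions, not from the original question:

    // Hypothetical caller sketch (names are assumptions).
    CVPixelBufferRef resized = [self GPUImageCreateResizedSampleBufferWithBuffer:cameraFrame
                                                                      withBuffer:CGSizeMake(320, 568)
                                                                withSampleBuffer:sampleBuffer];
    // ... encode or display `resized` ...

    // The returned buffer follows the Create rule, so release it once the
    // frame has been consumed. The CMSampleBuffer created inside the method
    // is written into the by-value `sampleBuffer` parameter, so it never
    // escapes and cannot be released here.
    CVPixelBufferRelease(resized);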

1 Answer:

Answer 0 (score: 0)

CG* (Core Graphics) runs on the CPU and is too slow for real-time video. Use the CV* APIs and the GPU instead:

    // - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    CIImage *baseImg = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIImage *resultImg = [baseImg imageByCroppingToRect:outputFrameCropRect];
    resultImg = [resultImg imageByApplyingTransform:outputFrameTransform];

    // created once
    // glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    // ciContext = [CIContext contextWithEAGLContext:glCtx options:@{kCIContextWorkingColorSpace:[NSNull null]}];
    // ciContextColorSpace = CGColorSpaceCreateDeviceRGB();
    // CVReturn res = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, VTCompressionSessionGetPixelBufferPool(compressionSession), &finishPixelBuffer);

    [ciContext render:resultImg toCVPixelBuffer:finishPixelBuffer bounds:resultImg.extent colorSpace:ciContextColorSpace];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
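For completeness, here is a self-contained sketch of the pipeline the answer outlines: the CIContext, color space, and pixel buffer pool are created once, and each camera frame is scaled and letterboxed on the GPU into a pooled 320x568 output buffer. The class name, method names, and fixed output size are assumptions for illustration; the original answer draws its output buffers from a VTCompressionSession pool instead.

    #import <AVFoundation/AVFoundation.h>
    #import <CoreImage/CoreImage.h>
    #import <OpenGLES/EAGL.h>

    @interface PortraitFrameRenderer : NSObject
    - (CVPixelBufferRef)copyResizedPixelBufferFrom:(CVPixelBufferRef)pixelBuffer;
    @end

    @implementation PortraitFrameRenderer {
        CIContext *_ciContext;
        CGColorSpaceRef _colorSpace;
        CVPixelBufferPoolRef _pool;
    }

    // One-time setup: a GPU-backed CIContext plus a pool of 320x568 BGRA buffers.
    - (instancetype)init {
        if ((self = [super init])) {
            EAGLContext *glCtx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
            _ciContext = [CIContext contextWithEAGLContext:glCtx
                                                   options:@{kCIContextWorkingColorSpace: [NSNull null]}];
            _colorSpace = CGColorSpaceCreateDeviceRGB();

            NSDictionary *attrs = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
                                    (id)kCVPixelBufferWidthKey: @320,
                                    (id)kCVPixelBufferHeightKey: @568,
                                    (id)kCVPixelBufferIOSurfacePropertiesKey: @{}};
            CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attrs, &_pool);
        }
        return self;
    }

    // Per frame (e.g. from captureOutput:didOutputSampleBuffer:fromConnection:):
    // aspect-fit the incoming frame into 320x568, letterbox it over black, and
    // render on the GPU into a pooled buffer. The caller releases the result.
    - (CVPixelBufferRef)copyResizedPixelBufferFrom:(CVPixelBufferRef)pixelBuffer {
        CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

        CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        CGRect target = CGRectMake(0, 0, 320, 568);
        CGRect fit = AVMakeRectWithAspectRatioInsideRect(image.extent.size, target);
        CGFloat scale = fit.size.width / image.extent.size.width;

        // Scale first, then translate into the centered letterbox position.
        CGAffineTransform t = CGAffineTransformMakeScale(scale, scale);
        t = CGAffineTransformConcat(t, CGAffineTransformMakeTranslation(fit.origin.x, fit.origin.y));
        CIImage *scaled = [image imageByApplyingTransform:t];

        CIImage *background = [[CIImage imageWithColor:[CIColor colorWithRed:0 green:0 blue:0]]
                               imageByCroppingToRect:target];
        CIImage *composited = [scaled imageByCompositingOverImage:background];

        CVPixelBufferRef output = NULL;
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, _pool, &output);
        if (output) {
            [_ciContext render:composited toCVPixelBuffer:output bounds:target colorSpace:_colorSpace];
        }

        CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
        return output;
    }

    - (void)dealloc {
        CGColorSpaceRelease(_colorSpace);
        if (_pool) CVPixelBufferPoolRelease(_pool);
    }
    @end

Because the rendering and the pixel-format conversion stay on the GPU and the output buffers are recycled through the pool, this avoids both the per-frame CPU bitmap copy and the per-frame heap allocations of the Core Graphics approach in the question.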