iOS video recording - different color spaces between AVAssetWriterInputPixelBufferAdaptor and CVPixelBuffer

Date: 2019-06-24 16:58:25

Tags: ios objective-c avfoundation

I am trying to record video from a stream to a file on iOS. The frames come from openFrameworks, and I want to use iOS AVFoundation (AVAssetWriter, AVAssetWriterInput, AVAssetWriterInputPixelBufferAdaptor) to save the video as MP4. I can save the video successfully, but there is a mismatch between the color space of the input buffers and that of the buffers saved by the AVAssetWriterInputPixelBufferAdaptor.

[Image: the input frame] [Image: what gets saved]

The buffer definitely contains RGBA pixels (the last byte of each pixel is A, and it is always ff).

Is it possible to specify a different buffer format on the AVAssetWriterInputPixelBufferAdaptor? Or is the only option to convert the buffers to match ARGB? What is the best strategy for converting a buffer from one channel layout to another?
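For reference, converting between these layouts is just a per-pixel byte swizzle. Below is a minimal sketch in plain C of an RGBA-to-BGRA swap (in production, `vImagePermuteChannels_ARGB8888` from the Accelerate framework does the same job much faster):

```c
#include <stddef.h>
#include <stdint.h>

/* Convert a tightly packed RGBA buffer to BGRA in place by
 * swapping the R and B bytes of every 4-byte pixel. */
static void rgba_to_bgra(uint8_t *pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t *p = pixels + i * 4;
        uint8_t r = p[0];
        p[0] = p[2];   /* B takes R's slot */
        p[2] = r;      /* R takes B's slot */
        /* G (p[1]) and A (p[3]) stay in place */
    }
}
```

An RGBA-to-ARGB conversion is the same idea with a 4-byte rotation instead of a swap.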

I did try changing the bufferAttributes passed to the AVAssetWriterInputPixelBufferAdaptor when creating it, but then I get an error.

NSDictionary* bufferAttributes = 
  [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], 
    kCVPixelBufferPixelFormatTypeKey, nil
  ];

If I change the kCVPixelBufferPixelFormatTypeKey value to kCVPixelFormatType_32RGBA, I get an error when calling CVPixelBufferPoolCreatePixelBuffer.
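That error comes back as a raw CVReturn code, which is hard to read in an NSLog. As a debugging aid, a small helper can translate the common codes into names; the constant values below are reproduced from CoreVideo's CVReturn.h so the sketch is self-contained:

```c
#include <stdint.h>

/* A few CVReturn codes, with values as defined in CVReturn.h. */
enum {
    kCVReturnSuccess            = 0,
    kCVReturnError              = -6660,
    kCVReturnInvalidArgument    = -6661,
    kCVReturnAllocationFailed   = -6662,
    kCVReturnInvalidPixelFormat = -6680,
};

/* Map a CVReturn code to its symbolic name for logging. */
static const char *cv_return_name(int32_t code)
{
    switch (code) {
        case kCVReturnSuccess:            return "kCVReturnSuccess";
        case kCVReturnError:              return "kCVReturnError";
        case kCVReturnInvalidArgument:    return "kCVReturnInvalidArgument";
        case kCVReturnAllocationFailed:   return "kCVReturnAllocationFailed";
        case kCVReturnInvalidPixelFormat: return "kCVReturnInvalidPixelFormat";
        default:                          return "unrecognized CVReturn";
    }
}
```

If the pool rejects kCVPixelFormatType_32RGBA with kCVReturnInvalidPixelFormat, that suggests the format simply is not supported there; kCVPixelFormatType_32BGRA is the layout iOS supports most broadly.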

My program is based on the code in this github repo.


    // SETUP
    videoWriter = [[AVAssetWriter alloc] initWithURL:[self tempFileURL] fileType:AVFileTypeQuickTimeMovie error:&error];
    NSParameterAssert(videoWriter);

    //Configure video
    NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                           [NSNumber numberWithDouble:1024.0*1024.0], AVVideoAverageBitRateKey,
                                           nil];

    NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   videoCompressionProps, AVVideoCompressionPropertiesKey,
                                   nil];

    videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings] retain];

    NSParameterAssert(videoWriterInput);
    videoWriterInput.expectsMediaDataInRealTime = YES;

    NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

    avAdaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes] retain];

    //add input
    [videoWriter addInput:videoWriterInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:CMTimeMake(0, 1000)];



    //ADD VIDEO FRAME
    float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
    CMTime time = CMTimeMake((int)millisElapsed, 1000);

    if (![videoWriterInput isReadyForMoreMediaData]) {
        NSLog(@"Not ready for video data");
    }
    else {
        @synchronized (self) {

            UIImage* newFrame = [videoFrame retain];
            CVPixelBufferRef pixelBuffer = NULL;
            CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
            CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

            CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &pixelBuffer);
            if (status != kCVReturnSuccess) {
                //could not get a buffer from the pool
                NSLog(@"Error creating pixel buffer:  status=%d", status);
            }
            else {
                // set image data into pixel buffer
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                uint8_t* destPixels = (uint8_t*) CVPixelBufferGetBaseAddress(pixelBuffer);
                CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);  //XXX: only works if the pixel buffer is contiguous and has the same bytesPerRow as the input data

                BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
                if (!success)
                    NSLog(@"Warning:  Unable to write buffer to video");

                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                CVPixelBufferRelease(pixelBuffer);
            }

            //clean up
            [newFrame release];
            CFRelease(image);
            CGImageRelease(cgImage);
        }
    }
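Separately from the channel-order question, the XXX comment above is a real hazard: CVPixelBuffer rows are frequently padded, so CVPixelBufferGetBytesPerRow can be larger than width * 4, and a single flat CFDataGetBytes then shears the image. A stride-aware copy, sketched here in plain C, avoids that:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy image rows into a destination whose rows may be padded
 * (destBytesPerRow >= width * 4), as CVPixelBuffer rows often are.
 * srcBytesPerRow would come from CGImageGetBytesPerRow, and
 * destBytesPerRow from CVPixelBufferGetBytesPerRow. */
static void copy_rows(uint8_t *dest, size_t destBytesPerRow,
                      const uint8_t *src, size_t srcBytesPerRow,
                      size_t width, size_t height)
{
    size_t rowBytes = width * 4;           /* 4 bytes per 32-bit pixel */
    for (size_t y = 0; y < height; y++) {
        memcpy(dest + y * destBytesPerRow,
               src  + y * srcBytesPerRow,
               rowBytes);
    }
}
```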

I also tried converting the CGImage to a CVPixelBufferRef with the code below, but it performs poorly and leaks memory.


    NSDictionary *options = @{
                              (NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
                              (NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
                              };

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options, &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Operation failed");
    }
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4*CGImageGetWidth(image),
                                                 rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, CGImageGetHeight(image));
    CGContextConcatCTM(context, flipVertical);
    CGAffineTransform flipHorizontal = CGAffineTransformMake(-1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0);
    CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

0 answers:

No answers yet