Adding filters to video with AVFoundation (OS X) - how do I write the resulting image back to the AVAssetWriter?

Asked: 2014-04-02 17:58:17

Tags: macos avfoundation avassetwriter ciimage cmsamplebuffer

Setting the scene

I'm working on a video processing application that runs from the command line to read in, process, and then export video. I'm working with 4 tracks:

  1. Lots of clips appended into a single track to make one video. Let's call this the ugcVideoComposition.
  2. Clips with alpha, positioned on a second track and composited on top of the ugcVideoComposition at export time using layer instructions.
  3. A music audio track.
  4. An audio track for the ugcVideoComposition, containing the audio from the clips that were appended into the single track.

I have all of this working and can composite it and export it correctly using AVAssetExportSession.

The problem

What I now want to do is apply filters and gradients to the ugcVideoComposition.

My research so far suggests that this is done by using an AVAssetReader and an AVAssetWriter, extracting a CIImage, manipulating it with filters, and then writing it back out.

I haven't yet got all of that working, but I have managed to read the ugcVideoComposition in and write it back out to disk using an AssetReader and AssetWriter:

        BOOL done = NO;
        while (!done)
        {
            while ([assetWriterVideoInput isReadyForMoreMediaData] && !done)
            {
                CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
                if (sampleBuffer)
                {
                    // Let's try create an image....
                    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                    CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer];
    
                    // < Apply filters and transformations to the CIImage here
    
                    // < HOW TO GET THE TRANSFORMED IMAGE BACK INTO SAMPLE BUFFER??? >
    
                    // Write things back out.
                    [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
    
                    CFRelease(sampleBuffer);
                    sampleBuffer = NULL;
                }
                else
                {
                    // Find out why we couldn't get another sample buffer....
                    if (assetReader.status == AVAssetReaderStatusFailed)
                    {
                        NSError *failureError = assetReader.error;
                        // Do something with this error.
                    }
                    else
                    {
                        // Some kind of success....
                        done = YES;
                        [assetWriter finishWriting];
    
                    }
                }
            }
        }

As you can see, I can even get a CIImage from the CMSampleBuffer, and I'm confident I can work out how to manipulate the image and apply whatever effects I need. What I don't know how to do is get the resulting, manipulated image back into a SampleBuffer so that I can write it out again.

The question

Given a CIImage, how can I get it into a sample buffer so that I can append it with the assetWriter?

Any help is appreciated - the AVFoundation documentation is terrible and either misses the key points (like how to put an image back once you've extracted it) or focuses on rendering images to the iPhone screen, which is not what I want to do.

Many thanks!

2 Answers:

Answer 0 (score: 3)

Try using: SDAVAssetExportSession

SDAVAssetExportSession on GitHub

Then implement the delegate to process the pixels:

    - (void)exportSession:(SDAVAssetExportSession *)exportSession renderFrame:(CVPixelBufferRef)pixelBuffer withPresentationTime:(CMTime)presentationTime toBuffer:(CVPixelBufferRef)renderBuffer
    {
        // Do your CIImage and CIFilter work in here
    }
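
As a minimal sketch of that idea (assuming a CIContext kept in a _ciContext ivar and using CISepiaTone as a stand-in filter; note that CIContext's render:toCVPixelBuffer: may not be available on older OS X versions, in which case the CGImage round-trip from the answer below applies), the delegate might look something like:

    // Minimal sketch: filter the incoming frame and render the result into the
    // buffer that the export session will write out.
    // _ciContext is assumed to be a CIContext created once, elsewhere.
    - (void)exportSession:(SDAVAssetExportSession *)exportSession
              renderFrame:(CVPixelBufferRef)pixelBuffer
     withPresentationTime:(CMTime)presentationTime
                 toBuffer:(CVPixelBufferRef)renderBuffer
    {
        CIImage *inputImage = [CIImage imageWithCVImageBuffer:pixelBuffer];

        CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
        [filter setDefaults];
        [filter setValue:inputImage forKey:kCIInputImageKey];
        [filter setValue:@1.0f forKey:kCIInputIntensityKey];
        CIImage *outputImage = [filter valueForKey:kCIOutputImageKey];

        // Render the filtered image straight into the destination pixel buffer.
        [_ciContext render:outputImage toCVPixelBuffer:renderBuffer];
    }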

Answer 1 (score: 1)

I eventually found a solution by digging through a lot of half-complete samples and Apple's poor AVFoundation documentation.

The biggest confusion is that while, at a high level, AVFoundation is "reasonably" consistent between iOS and OS X, the lower-level items behave differently, have different methods and use different techniques. This solution is for OS X.

Setting up the AssetWriter

The first thing is to make sure that when you set up the asset writer, you add an adaptor to read in from a CVPixelBuffer. This buffer will contain the modified frames.

    // Create the asset writer input and add it to the asset writer.
    AVAssetWriterInput *assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[[videoTracks objectAtIndex:0] mediaType] outputSettings:videoSettings];
    // Now create an adaptor that writes pixels too!
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                                                                         sourcePixelBufferAttributes:nil];
    assetWriterVideoInput.expectsMediaDataInRealTime = NO;
    [assetWriter addInput:assetWriterVideoInput];
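
The snippet above assumes videoTracks and videoSettings already exist. Purely as an illustrative sketch (assuming ugcVideoComposition is the composition described in the question, with placeholder H.264 settings sized to match the 640x360 buffers used later), they might be defined along these lines:

    // Illustrative only: video tracks pulled from the composition, plus basic
    // H.264 output settings sized to match the 640x360 buffers used below.
    NSArray *videoTracks = [ugcVideoComposition tracksWithMediaType:AVMediaTypeVideo];
    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @640,
                                     AVVideoHeightKey : @360 };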

Reading and Writing

The challenge here is that I could not find directly comparable methods between iOS and OS X - iOS is able to render a context directly to a PixelBuffer, whereas OS X does NOT support that option. The context is also configured differently between iOS and OS X.

Note that you should also include QuartzCore.framework in your Xcode project.

Creating the context on OS X:

    CIContext *context = [CIContext contextWithCGContext:
                      [[NSGraphicsContext currentContext] graphicsPort]
                                             options: nil]; // We don't want to always create a context so we put it outside the loop

Now you want to loop round, reading off the AssetReader and writing to the AssetWriter... but note that you are writing via the adaptor created earlier, not with the SampleBuffer. (A rough sketch of how the reader side might be set up follows after the loop.)

    while ([adaptor.assetWriterInput isReadyForMoreMediaData] && !done)
    {
        CMSampleBufferRef sampleBuffer = [videoCompositionOutput copyNextSampleBuffer];
        if (sampleBuffer)
        {
            CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

            // GRAB AN IMAGE FROM THE SAMPLE BUFFER
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                     [NSNumber numberWithInt:640.0], kCVPixelBufferWidthKey,
                                     [NSNumber numberWithInt:360.0], kCVPixelBufferHeightKey,
                                     nil];

            CIImage *inputImage = [CIImage imageWithCVImageBuffer:imageBuffer options:options];

            //-----------------
            // FILTER IMAGE - APPLY ANY FILTERS IN HERE

            CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
            [filter setDefaults];
            [filter setValue: inputImage forKey: kCIInputImageKey];
            [filter setValue: @1.0f forKey: kCIInputIntensityKey];

            CIImage *outputImage = [filter valueForKey: kCIOutputImageKey];


            //-----------------
            // RENDER OUTPUT IMAGE BACK TO PIXEL BUFFER
            // 1. Firstly render the image
            CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];

            // 2. Grab the size
            CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));

            // 3. Convert the CGImage to a PixelBuffer
            CVPixelBufferRef pxBuffer = NULL;
            // pixelBufferFromCGImage is documented below.
            pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];

            // 4. Write things back out.
            // Calculate the frame time
            CMTime frameTime = CMTimeMake(1, 30); // Represents 1 frame at 30 FPS
            // Note that if you actually had a sequence of images (an animation or
            // transition perhaps), your frameTime would represent the number of
            // images / frames, not just 1 as I've done here.
            CMTime presentTime = CMTimeAdd(currentTime, frameTime);

            // Finally write out using the adaptor.
            [adaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

            // Release the per-frame CGImage and pixel buffer created above.
            CGImageRelease(finalImage);
            CVPixelBufferRelease(pxBuffer);

            CFRelease(sampleBuffer);
            sampleBuffer = NULL;
        }
        else
        {
            // Find out why we couldn't get another sample buffer....
            if (assetReader.status == AVAssetReaderStatusFailed)
            {
                NSError *failureError = assetReader.error;
                // Do something with this error.
            }
            else
            {
                // Some kind of success....
                done = YES;
                [assetWriter finishWriting];
            }
        }
    }
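
The assetReader and videoCompositionOutput used in the loop are assumed to have been set up beforehand. A rough, illustrative sketch of that setup (variable names match the loop above; everything else is an assumption) might be:

    // Rough sketch of the reader side: read the composition's video tracks
    // through an AVAssetReaderVideoCompositionOutput, then start both the
    // reader and the writer before entering the loop above.
    NSError *readerError = nil;
    AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:ugcVideoComposition
                                                                error:&readerError];

    AVAssetReaderVideoCompositionOutput *videoCompositionOutput =
        [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks
                                                                                videoSettings:nil];
    // Use your own AVVideoComposition (with layer instructions) here if you have
    // one; otherwise a default one can be derived from the asset's properties.
    videoCompositionOutput.videoComposition =
        [AVVideoComposition videoCompositionWithPropertiesOfAsset:ugcVideoComposition];

    if ([assetReader canAddOutput:videoCompositionOutput])
        [assetReader addOutput:videoCompositionOutput];

    [assetReader startReading];
    [assetWriter startWriting];
    [assetWriter startSessionAtSourceTime:kCMTimeZero];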

Creating the PixelBuffer

There MUST be a simpler way, but for now this works and is the only way I found to get directly from a CIImage to a PixelBuffer (via a CGImage) on OS X. The following code is cut and pasted from AVFoundation + AssetWriter: Generate Movie With Images and Audio.

    - (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                         nil];
        CVPixelBufferRef pxbuffer = NULL;

        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                      size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                      &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        // Use the pixel buffer's actual bytes-per-row in case the buffer is
        // padded beyond 4 * width.
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                             size.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                             rgbColorSpace, kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                       CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }