Mixing images and video using AVFoundation

Time: 2014-10-20 22:19:28

Tags: ios macos avfoundation cgimage caanimation

I am trying to splice images into a pre-existing video in order to create a new video file, using AVFoundation on the Mac.

So far I have read the Apple documentation example,

ASSETWriterInput for making Video from UIImages on Iphone Issues

Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool

AVFoundation Tutorial: Adding Overlays and Animations to Videos, as well as a few other SO links.

These have proven very useful, but my problem is that I am not creating a static watermark or overlay; I want to insert images in between sections of the video. So far I have managed to take the video, create blank sections for these images to be inserted into, and export it.

My problem is getting the images to insert themselves into those blank sections. The only way I can see this working is to create a series of animated layers that change their opacity at the correct times, but I can't seem to get the animations to work.

The following code is what I am using to create the video segments and the layer animations.

    //https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_Editing.html#//apple_ref/doc/uid/TP40010188-CH8-SW7

    // let's start by making our video composition
    AVMutableComposition* mutableComposition = [AVMutableComposition composition];
    AVMutableCompositionTrack* mutableCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    AVMutableVideoComposition* mutableVideoComposition = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:gVideoAsset];

    // if the first point's frame doesn't start on 0
    if (gFrames[0].startTime.value != 0)
    {
        DebugLog("Inserting vid at 0");
        // then add the video track to the composition track with a time range from 0 to the first point's startTime
        [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, gFrames[0].startTime) ofTrack:gVideoTrack atTime:kCMTimeZero error:&gError];

    }

    if(gError)
    {
        DebugLog("Error inserting original video segment");
        GetError();
    }

    // create our parent layer and video layer
    CALayer* parentLayer = [CALayer layer];
    CALayer* videoLayer = [CALayer layer];

    parentLayer.frame = CGRectMake(0, 0, 1280, 720);
    videoLayer.frame = CGRectMake(0, 0, 1280, 720);

    [parentLayer addSublayer:videoLayer];

    // create an offset value that should be added to each point where a new video segment should go
    CMTime timeOffset = CMTimeMake(0, 600);

    // loop through each additional frame
    for(int i = 0; i < gFrames.size(); i++)
    {
        // create an animation layer and assign its contents to the CGImage of the frame
        CALayer* Frame = [CALayer layer];
        Frame.contents = (__bridge id)gFrames[i].frameImage;
        Frame.frame = CGRectMake(0, 720, 1280, -720);

        DebugLog("inserting empty time range");
        // add frame point to the composition track starting at the point's start time
        // insert an empty time range for the duration of the frame animation
        [mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];

        // update the time offset by the duration
        timeOffset = CMTimeAdd(timeOffset, gFrames[i].duration);

        // make the layer completely transparent
        Frame.opacity = 0.0f;

        // create an animation for setting opacity to 0 on start
        CABasicAnimation* frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;

        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];

        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero;
        frameAnim.speed = 1.0f;

        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];

        // create an animation for setting opacity to 1
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;

        frameAnim.fromValue = [NSNumber numberWithFloat:1.0];
        frameAnim.toValue = [NSNumber numberWithFloat:1.0];

        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].startTime);
        frameAnim.speed = 1.0f;

        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];

        // create an animation for setting opacity to 0
        frameAnim = [CABasicAnimation animationWithKeyPath:@"opacity"];
        frameAnim.duration = 1.0f;
        frameAnim.repeatCount = 0;
        frameAnim.autoreverses = NO;

        frameAnim.fromValue = [NSNumber numberWithFloat:0.0];
        frameAnim.toValue = [NSNumber numberWithFloat:0.0];

        frameAnim.beginTime = AVCoreAnimationBeginTimeAtZero + CMTimeGetSeconds(gFrames[i].endTime);
        frameAnim.speed = 1.0f;

        [Frame addAnimation:frameAnim forKey:@"animateOpacity"];

        // add the frame layer to our parent layer
        [parentLayer addSublayer:Frame];

        gError = nil;

        // if there's another point after this one
        if( i < gFrames.size()-1)
        {
            // add our video file to the composition with a range of this point's end and the next point's start
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime,
                            CMTimeMake(gFrames[i+1].startTime.value - gFrames[i].startTime.value, 600))
                            ofTrack:gVideoTrack
                            atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];

        }
        // else just add our video file with a range from this point's end to the video's duration
        else
        {
            [mutableCompositionTrack insertTimeRange:CMTimeRangeMake(gFrames[i].startTime, CMTimeSubtract(gVideoAsset.duration, gFrames[i].startTime)) ofTrack:gVideoTrack atTime:CMTimeAdd(gFrames[i].startTime, timeOffset) error:&gError];
        }

        if(gError)
        {
            char errorMsg[256];
            sprintf(errorMsg, "Error inserting original video segment at: %d", i);
            DebugLog(errorMsg);
            GetError();
        }
    }

Now in that snippet the Frame layer's opacity is set to 0.0f, but when I set it to 1.0f, all it does is place the last of those frames on top of the video for its entire duration.

After that, the video is exported using an AVAssetExportSession, as shown below.

    mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

    // create a layer instruction for our newly created animation tool
    AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:gVideoTrack];

    AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    [instruction setTimeRange:CMTimeRangeMake(kCMTimeZero, [mutableComposition duration])];
    [layerInstruction setOpacity:1.0f atTime:kCMTimeZero];
    [layerInstruction setOpacity:0.0f atTime:mutableComposition.duration];
    instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];

    // set the instructions on our videoComposition
    mutableVideoComposition.instructions = [NSArray arrayWithObject:instruction];

    // export final composition to a video file

    // convert the videopath into a url for our AVAssetWriter to create a file at
    NSString* vidPath = CreateNSString(outputVideoPath);
    NSURL* vidURL = [NSURL fileURLWithPath:vidPath];

    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPreset1280x720];

    exporter.outputFileType = AVFileTypeMPEG4;

    exporter.outputURL = vidURL;
    exporter.videoComposition = mutableVideoComposition;
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);

    // Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            if (exporter.status == AVAssetExportSessionStatusCompleted)
            {
                DebugLog("!!!file created!!!");
                _Close();
            }
            else if(exporter.status == AVAssetExportSessionStatusFailed)
            {
                DebugLog("failed damn");
                DebugLog(cStringCopy([[[exporter error] localizedDescription] UTF8String]));
                DebugLog(cStringCopy([[[exporter error] description] UTF8String]));
                _Close();
            }
            else
            {
                DebugLog("NoIdea");
                _Close();
            }
        });
    }];


}

I get the feeling the animations are not being started, but I don't know. Am I going about splicing image data into a video like this the right way?

Any assistance would be greatly appreciated.

1 Answer:

Answer 0 (score: 3)

I solved my problem by going a different route. The animation approach was not working, so my solution was to compile all of the insertable images into a temporary video file, and to use that video to insert the images into my final output video.

Starting from the first link I originally posted, ASSETWriterInput for making Video from UIImages on Iphone Issues, I created the following function to build my temporary video:

void CreateFrameImageVideo(NSString* path)
{
    NSLog(@"Creating writer at path %@", path);
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
                                  [NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4
                                                              error:&error];

    NSLog(@"Creating video codec settings");
    NSDictionary *codecSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:gVideoTrack.estimatedDataRate/*128000*/], AVVideoAverageBitRateKey,
                                   [NSNumber numberWithInt:gVideoTrack.nominalFrameRate],AVVideoMaxKeyFrameIntervalKey,
                                   AVVideoProfileLevelH264MainAutoLevel, AVVideoProfileLevelKey,
                                   nil];

    NSLog(@"Creating video settings");
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   codecSettings,AVVideoCompressionPropertiesKey,
                                   [NSNumber numberWithInt:1280], AVVideoWidthKey,
                                   [NSNumber numberWithInt:720], AVVideoHeightKey,
                                   nil];

    NSLog(@"Creating writter input");
    AVAssetWriterInput* writerInput = [[AVAssetWriterInput
                                        assetWriterInputWithMediaType:AVMediaTypeVideo
                                        outputSettings:videoSettings] retain];

    NSLog(@"Creating adaptor");
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];

    [videoWriter addInput:writerInput];

    NSLog(@"Starting session");
    //Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];


    CMTime timeOffset = kCMTimeZero;//CMTimeMake(0, 600);

    NSLog(@"Video Width %d, Height: %d, writing frame video to file", gWidth, gHeight);

    CVPixelBufferRef buffer;

    for(int i = 0; i< gAnalysisFrames.size(); i++)
    {
        while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
            NSLog(@"Waiting inside a loop");
            NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
            [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
        }

        //Write samples:
        buffer = pixelBufferFromCGImage(gAnalysisFrames[i].frameImage, gWidth, gHeight);

        [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];



        timeOffset = CMTimeAdd(timeOffset, gAnalysisFrames[i].duration);
    }

    while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
        NSLog(@"Waiting outside a loop");
        NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
        [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
    }

    buffer = pixelBufferFromCGImage(gAnalysisFrames[gAnalysisFrames.size()-1].frameImage, gWidth, gHeight);
    [adaptor appendPixelBuffer:buffer withPresentationTime:timeOffset];

    NSLog(@"Finishing session");
    //Finish the session:
    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:timeOffset];
    BOOL successfulWrite = [videoWriter finishWriting];

    // if we failed to write the video
    if(!successfulWrite)
    {

        NSLog(@"Session failed with error: %@", [[videoWriter error] description]);

        // delete the temporary file created
        NSFileManager *fileManager = [NSFileManager defaultManager];
        if ([fileManager fileExistsAtPath:path]) {
            NSError *error;
            if ([fileManager removeItemAtPath:path error:&error] == NO) {
                NSLog(@"removeItemAtPath %@ error:%@", path, error);
            }
        }
    }
    else
    {
        NSLog(@"Session complete");
    }

    [writerInput release];

}
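
The pixelBufferFromCGImage helper used above is not shown here; it comes from the ASSETWriterInput question linked at the start. For reference, a minimal sketch of what such a helper typically looks like, written for manual reference counting to match the code above (the ARGB pixel format and alpha settings are assumptions and may need adjusting for your source images):

    // Sketch only -- not taken from the original answer.
    static CVPixelBufferRef pixelBufferFromCGImage(CGImageRef image, int width, int height)
    {
        // ask for a pixel buffer that a CoreGraphics bitmap context can draw into directly
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], (NSString*)kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], (NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];

        CVPixelBufferRef buffer = NULL;
        CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                            (CFDictionaryRef)options, &buffer);

        // draw the CGImage into the buffer's backing memory
        CVPixelBufferLockBaseAddress(buffer, 0);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                     width, height, 8,
                                                     CVPixelBufferGetBytesPerRow(buffer),
                                                     colorSpace,
                                                     (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(buffer, 0);

        // the caller owns the returned buffer and should CVPixelBufferRelease() it when finished
        return buffer;
    }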

Once the video has been created, it is loaded as an AVAsset, its video track is extracted, and the video is then inserted into the composition by replacing the following line (from the first code block in the original post):

[mutableCompositionTrack insertEmptyTimeRange:CMTimeRangeMake(CMTimeAdd(gFrames[i].startTime, timeOffset), gFrames[i].duration)];

with:

[mutableCompositionTrack insertTimeRange:CMTimeRangeMake(timeOffset,gAnalysisFrames[i].duration)
                                     ofTrack:gFramesTrack
                                     atTime:CMTimeAdd(gAnalysisFrames[i].startTime, timeOffset) error:&gError];

where gFramesTrack is the AVAssetTrack created from the temporary frame video.
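
The answer does not show how that track is obtained. A minimal sketch, assuming the path that was passed to CreateFrameImageVideo is available here as framesVideoPath (a hypothetical name):

    // load the temporary frame video written by CreateFrameImageVideo
    AVURLAsset* gFramesAsset = [AVURLAsset URLAssetWithURL:[NSURL fileURLWithPath:framesVideoPath] options:nil];

    // take its single video track so it can be spliced into the composition
    AVAssetTrack* gFramesTrack = [[gFramesAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];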

All of the code relating to the CALayer and CABasicAnimation objects has been removed, as it simply did not work.

Not the most elegant solution, I don't think, but one that at least works. I hope someone finds this useful.

This code also works on iOS devices (tested using an iPad 3).

Side note: the DebugLog function in the first post is just a callback to a function that prints out log messages; those calls can be replaced with NSLog() calls if need be.
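
For example, a drop-in replacement could be as simple as a macro along these lines (a sketch; it assumes the messages are plain C strings, as in the calls above):

    #define DebugLog(msg) NSLog(@"%s", (msg))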