AVFoundation: overlay text on different videos, then combine and export

Date: 2015-06-22 15:15:20

Tags: ios swift video avfoundation overlay

I'm running into what seems like a simple AVFoundation problem, and I can't find an answer on Stack or elsewhere online. I'm trying to take 8 videos, overlay text on each one individually, and then merge them into one complete video. I've managed to merge them together, but for some reason I can't figure out how to add the text layer on top of each one first.

I've been working from Ray Wenderlich's tutorial, which is great, but I can't figure out how to adapt it to my specific case. Below is the code I have so far for merging the videos. Thanks for your help!

        var mainComposition = AVMutableComposition()
        var videoCompositionTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
        var audioCompositionTrack = mainComposition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
        var insertTime = kCMTimeZero

        var videoCompositionLocal = AVMutableVideoComposition()

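        // Append each clip's video and audio to the composition tracks, end to end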
        for (index, playerItem) in enumerate(flipsArray) {

            var videoAsset = playerItem.asset
            var word = self.words![index]

            let videoTimeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
            let videoTrack: AnyObject = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]

            videoCompositionTrack.insertTimeRange(videoTimeRange,
                                             ofTrack: videoTrack as AVAssetTrack,
                                             atTime: insertTime,
                                             error: nil)


            let audioTimeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
            let audioTrack: AnyObject = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]

            audioCompositionTrack.insertTimeRange(audioTimeRange,
                ofTrack: audioTrack as AVAssetTrack,
                atTime: insertTime,
                error: nil)

            insertTime = CMTimeAdd(insertTime, videoAsset.duration)
        }

        // 4 - Get path
        let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory,
                                                        .UserDomainMask,
                                                        true);
        let documentsDirectory = paths[0] as NSString;
        let myPathDocs = documentsDirectory.stringByAppendingPathComponent("flip-\(arc4random() % 1000).mov")

        let url = NSURL.fileURLWithPath(myPathDocs)

        // 5 - Create exporter
        var exporter = AVAssetExportSession(asset: mainComposition,
                                            presetName: AVAssetExportPresetMediumQuality)

        println("-------------")
        println(url)
        println("-------------")

        exporter.outputURL = url
        exporter.outputFileType = AVFileTypeQuickTimeMovie
        exporter.shouldOptimizeForNetworkUse = true
        exporter.exportAsynchronouslyWithCompletionHandler({
            switch exporter.status {
            case  AVAssetExportSessionStatus.Failed:
                println("Merge/export failed: \(exporter.error)")
            case AVAssetExportSessionStatus.Cancelled:
                println("Merge/export cancelled: \(exporter.error)")
            default:
                println("Merge/export complete.")
                self.exportDidFinish(exporter)
            }
        })
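The completion handler calls self.exportDidFinish(exporter), which isn't shown in the post. Assuming it follows the same tutorial's pattern of saving the finished file to the Camera Roll, it might look roughly like the sketch below (ALAssetsLibrary was the pre-Photos-framework API in 2015; this is purely illustrative, not the asker's actual implementation):

    // Purely illustrative: a plausible exportDidFinish, assuming it saves the
    // finished movie to the Camera Roll. Requires `import AssetsLibrary`.
    func exportDidFinish(session: AVAssetExportSession) {
        if session.status == AVAssetExportSessionStatus.Completed {
            let outputURL = session.outputURL
            let library = ALAssetsLibrary()
            if library.videoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL) {
                library.writeVideoAtPathToSavedPhotosAlbum(outputURL,
                    completionBlock: { (assetURL, error) in
                        if error != nil {
                            println("Failed to save video: \(error)")
                        } else {
                            println("Saved video to \(assetURL)")
                        }
                })
            }
        }
    }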

EDIT: I've gotten the text to overlay on the video. The problem now is that the text doesn't animate (change words) at all. My goal is to change the text value every X seconds, where X is the length of the current video clip. Help!
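For reference, one common way to drive a timed word switch on the single merged composition is AVVideoCompositionCoreAnimationTool: add one CATextLayer per word to a parent layer above the video layer, show each text layer only during its clip's time range with an opacity animation, and attach the layer tree to the video composition before the export starts. Below is a rough, untested sketch using the variable names from the code above. One well-known pitfall: Core Animation treats a beginTime of 0 as "now", so the first word's animation has to use AVCoreAnimationBeginTimeAtZero.

        // Sketch only: build a layer tree with one text layer per word and hand it
        // to the (so far unused) videoCompositionLocal. This must run before
        // exporter.exportAsynchronouslyWithCompletionHandler is called.
        let firstVideoTrack = flipsArray[0].asset.tracksWithMediaType(AVMediaTypeVideo)[0] as! AVAssetTrack
        let videoSize = firstVideoTrack.naturalSize   // assumes all clips share one size/orientation

        let videoLayer = CALayer()
        videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
        let parentLayer = CALayer()
        parentLayer.frame = videoLayer.frame
        parentLayer.addSublayer(videoLayer)

        var wordStartTime = kCMTimeZero
        for (index, playerItem) in enumerate(flipsArray) {
            let clipDuration = playerItem.asset.duration

            let textLayer = CATextLayer()
            textLayer.string = self.words![index]
            textLayer.fontSize = 48
            textLayer.alignmentMode = kCAAlignmentCenter
            textLayer.frame = CGRect(x: 0, y: 50, width: videoSize.width, height: 60)
            textLayer.opacity = 0.0   // hidden except while its clip plays

            // Hold opacity at 1 only during this clip's time range; outside the
            // animation's duration the layer falls back to its model value (0).
            let show = CABasicAnimation(keyPath: "opacity")
            show.fromValue = 1.0
            show.toValue = 1.0
            show.beginTime = index == 0 ? AVCoreAnimationBeginTimeAtZero : CMTimeGetSeconds(wordStartTime)
            show.duration = CMTimeGetSeconds(clipDuration)
            show.removedOnCompletion = false
            textLayer.addAnimation(show, forKey: "show-word-\(index)")

            parentLayer.addSublayer(textLayer)
            wordStartTime = CMTimeAdd(wordStartTime, clipDuration)
        }

        // Attach the layer tree so it is rendered into the exported movie.
        videoCompositionLocal.renderSize = videoSize
        videoCompositionLocal.frameDuration = CMTimeMake(1, 30)
        videoCompositionLocal.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mainComposition.duration)
        instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)]
        videoCompositionLocal.instructions = [instruction]

        exporter.videoComposition = videoCompositionLocal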

1 answer:

Answer 0 (score: 0):

In case anyone finds it useful, I found a way around my problem. Instead of stitching all 8 videos together and then applying an animated layer with timed word switches on top before exporting...

...I exported each video individually with its word overlaid on top, and then called another method to stitch those 8 newly exported videos into one. Since each word change then lines up exactly with the duration of its asset, this worked perfectly for me.
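A minimal sketch of that per-clip step: export one clip with a single static CATextLayer burned in via AVVideoCompositionCoreAnimationTool, then feed the exported files back into the merge code from the question. The function name exportOverlaidClip and its parameters are illustrative, not from the original post.

    // Illustrative only: export one clip with one static word drawn over it.
    func exportOverlaidClip(asset: AVAsset, word: String, outputURL: NSURL,
                            completion: (AVAssetExportSession) -> Void) {
        let videoTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0] as! AVAssetTrack
        let videoSize = videoTrack.naturalSize

        // Layer tree: the rendered video plus a text layer that is always visible,
        // so no timed animation is needed for a single clip.
        let videoLayer = CALayer()
        videoLayer.frame = CGRect(x: 0, y: 0, width: videoSize.width, height: videoSize.height)
        let textLayer = CATextLayer()
        textLayer.string = word
        textLayer.fontSize = 48
        textLayer.alignmentMode = kCAAlignmentCenter
        textLayer.frame = CGRect(x: 0, y: 50, width: videoSize.width, height: 60)
        let parentLayer = CALayer()
        parentLayer.frame = videoLayer.frame
        parentLayer.addSublayer(videoLayer)
        parentLayer.addSublayer(textLayer)

        // Video composition that composites the layer tree over the clip.
        let videoComposition = AVMutableVideoComposition()
        videoComposition.renderSize = videoSize
        videoComposition.frameDuration = CMTimeMake(1, 30)
        videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
            postProcessingAsVideoLayer: videoLayer, inLayer: parentLayer)

        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
        instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)]
        videoComposition.instructions = [instruction]

        let exporter = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetMediumQuality)
        exporter.outputURL = outputURL
        exporter.outputFileType = AVFileTypeQuickTimeMovie
        exporter.videoComposition = videoComposition
        exporter.exportAsynchronouslyWithCompletionHandler {
            completion(exporter)
        }
    }

Once all 8 clips have been exported this way, their output files can be loaded as AVAssets and run through the same merge code shown in the question.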

Hope this helps someone!