Merge multiple images and videos into one split video where first an image is shown, after that a video, and so on

Time: 2019-04-01 04:14:43

Tags: ios swift avfoundation

I want to merge multiple images and videos into one video, so I used AVFoundation and was able to merge them into a single composition. The main problem is that while I can add an image over the video using AVVideoCompositionCoreAnimationTool, I cannot merge them for the case below:

First an image is shown, then a video layer on top of that image, then a video again, and then an image again.

[output image]

Because AVVideoCompositionCoreAnimationTool always adds the image layer on top of the merged video, and I cannot find anything that would let me place an image layer below the AVVideoCompositionLayerInstruction layers, I have no way to arrange the Z positions of the video and image layers.
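For reference, below is a minimal sketch of the standard way the animation tool is set up (aOverlayImage, aRenderSize and the function name are placeholders, not my actual code). Since every video track is rendered into the single layer passed to postProcessingAsVideoLayer, any image layer added to the parent layer ends up either entirely above or entirely below the whole video, which is why I cannot interleave them:

    import AVFoundation
    import UIKit

    // Minimal sketch only: aOverlayImage and aRenderSize are placeholders.
    func makeAnimationToolSketch(aOverlayImage: UIImage, aRenderSize: CGSize) -> AVVideoCompositionCoreAnimationTool {
        // The tool renders all composited video tracks into this single layer.
        let aVideoLayer = CALayer()
        aVideoLayer.frame = CGRect(origin: .zero, size: aRenderSize)

        // An image layer added to the same parent layer.
        let aImageLayer = CALayer()
        aImageLayer.contents = aOverlayImage.cgImage
        aImageLayer.frame = CGRect(origin: .zero, size: aRenderSize)

        let aParentLayer = CALayer()
        aParentLayer.frame = CGRect(origin: .zero, size: aRenderSize)
        aParentLayer.addSublayer(aVideoLayer)
        aParentLayer.addSublayer(aImageLayer) // sublayer order decides whether the image is above or below the *entire* video

        return AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: aVideoLayer, in: aParentLayer)
    }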

To merge the videos I used the following code:

    let aMutableCompositionVideoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    let aMutableCompositionAudioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

    // Insert the asset's video track into the composition's video track
    if let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video).first {
        try? aMutableCompositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aVideoAssetTrack, at: .zero)
    }

    // Insert the asset's audio track into the composition's audio track
    if let aAudioAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .audio).first {
        try? aMutableCompositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aAudioAssetTrack, at: .zero)
    }
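Since both tracks above are inserted at .zero, appending several assets one after another needs a running insertion time, roughly like the sketch below (aArrAssets is a placeholder name for my array of assets):

    // Sketch only: aArrAssets is a placeholder for the AVAssets to merge back to back.
    var aInsertTime = CMTime.zero
    for aAsset in aArrAssets {
        if let aVideoTrack = aAsset.tracks(withMediaType: .video).first {
            try? aMutableCompositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aVideoTrack, at: aInsertTime)
        }
        if let aAudioTrack = aAsset.tracks(withMediaType: .audio).first {
            try? aMutableCompositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aAudioTrack, at: aInsertTime)
        }
        // Advance the cursor so the next asset starts where this one ends.
        aInsertTime = CMTimeAdd(aInsertTime, aAsset.duration)
    }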

To transform the video layers I used this code (this is not the complete code, but you will get an idea of what I am doing here):

    let aArrVideoTracks = mixComposition.tracks(withMediaType: .video)

    // Build one layer instruction per video track, with its transform and crop rectangle
    var aArrLayerInstructions = [AVMutableVideoCompositionLayerInstruction]()

    for (aIndex, aTrack) in aArrVideoTracks.enumerated() {

        let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)

        let aFinalTransform = aArrSmallVideoInstructions[aIndex]

        let aCropRect = getAspectRect(FromView: aArrView[aIndex], andTrack: aTrack)

        aLayerInstruction.setTransform(aFinalTransform, at: .zero)
        aLayerInstruction.setCropRectangle(aCropRect, at: .zero)

        print("CropRect -> \(aCropRect)")

        aArrLayerInstructions.append(aLayerInstruction)
    }

    let aTotalTime = aArrVideoTracks.map { $0.timeRange.duration }.max()

    // Note for tomorrow: have to figure out how to set the zPosition between image and video layers
    // Check the background color setting below for the video

    let aInstruction = AVMutableVideoCompositionInstruction()
    // aInstruction.backgroundColor = UIColor.white.cgColor
    aInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: aTotalTime!)
    aInstruction.layerInstructions = aArrLayerInstructions

    let aVideoComposition = AVMutableVideoComposition()
    aVideoComposition.instructions = [aInstruction]
    aVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)

    // Add the overlay image on the final video
    aVideoComposition.animationTool = getAnimationTool()

    let aTotalWidth = aArrView[0].superview?.bounds.width
    let aTotalHeight = aArrView[0].superview?.bounds.height
    aVideoComposition.renderSize = CGSize(width: aTotalWidth!, height: aTotalHeight!)
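For completeness, the composition is then rendered with an AVAssetExportSession roughly like the sketch below (aOutputURL is a placeholder for the destination file URL):

    // Sketch only: aOutputURL is a placeholder.
    if let aExporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) {
        aExporter.videoComposition = aVideoComposition // apply the transforms, crops and animation tool
        aExporter.outputFileType = .mp4
        aExporter.outputURL = aOutputURL
        aExporter.exportAsynchronously {
            print("Export finished with status: \(aExporter.status.rawValue)")
        }
    }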

Please help me achieve the desired output.

Thanks

0 Answers:

There are no answers yet.