Duet - Merge 2 Videos Side by Side

Date: 2019-04-20 07:24:54

Tags: ios swift avfoundation swift4.2

Note: merge the videos side by side without losing video quality.

I think this is a very important question, and after a lot of searching and googling I have not found any helpful material on it.

I am working on a project where I need to merge videos side by side into a single file.

I have already merged the videos using AVFoundation, but the problem is that the FIRST video appears as an overlay on top of the SECOND video (not merged side by side the way the SMULE app / karaoke apps or the TikTok app do it).

func mergeVideosFilesWithUrl(savedVideoUrl: URL, newVideoUrl: URL, audioUrl:URL)
    {
        let savePathUrl : NSURL = NSURL(fileURLWithPath: NSHomeDirectory() + "/Documents/camRecordedVideo.mp4")
        do { // delete old video
            try FileManager.default.removeItem(at: savePathUrl as URL)
        } catch { print(error.localizedDescription) }

        let mutableVideoComposition = AVMutableVideoComposition()
        let mixComposition = AVMutableComposition()

        let aNewVideoAsset : AVAsset = AVAsset(url: newVideoUrl)
        let asavedVideoAsset : AVAsset = AVAsset(url: savedVideoUrl)

        let aNewVideoTrack : AVAssetTrack = aNewVideoAsset.tracks(withMediaType: AVMediaType.video)[0]
        let aSavedVideoTrack : AVAssetTrack = asavedVideoAsset.tracks(withMediaType: AVMediaType.video)[0]

        let mutableCompositionNewVideoTrack : AVMutableCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
        do{
            try mutableCompositionNewVideoTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: aNewVideoAsset.duration), of: aNewVideoTrack, at: CMTime.zero)
        }catch {  print("Mutable Error") }

        let mutableCompositionSavedVideoTrack : AVMutableCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
        do{
            try mutableCompositionSavedVideoTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: asavedVideoAsset.duration), of: aSavedVideoTrack , at: CMTime.zero)
        }catch{ print("Mutable Error") }

        let mainInstruction = AVMutableVideoCompositionInstruction()
        mainInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: CMTimeMaximum(aNewVideoAsset.duration, asavedVideoAsset.duration) )

        let newVideoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mutableCompositionNewVideoTrack)
        let newScale : CGAffineTransform = CGAffineTransform.init(scaleX: 0.7, y: 0.7)
        let newMove : CGAffineTransform = CGAffineTransform.init(translationX: 230, y: 230)
        newVideoLayerInstruction.setTransform(newScale.concatenating(newMove), at: CMTime.zero)

        let savedVideoLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: mutableCompositionSavedVideoTrack)
        let savedScale : CGAffineTransform = CGAffineTransform.init(scaleX: 1.2, y: 1.5)
        let savedMove : CGAffineTransform = CGAffineTransform.init(translationX: 0, y: 0)
        savedVideoLayerInstruction.setTransform(savedScale.concatenating(savedMove), at: CMTime.zero)

        mainInstruction.layerInstructions = [newVideoLayerInstruction, savedVideoLayerInstruction]


        mutableVideoComposition.instructions = [mainInstruction]
        mutableVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
        mutableVideoComposition.renderSize = CGSize(width: 1240 , height: self.camPreview.frame.height)

        finalPath = savePathUrl.absoluteString
        let assetExport: AVAssetExportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
        assetExport.videoComposition = mutableVideoComposition
        assetExport.outputFileType = AVFileType.mov

        assetExport.outputURL = savePathUrl as URL
        assetExport.shouldOptimizeForNetworkUse = true

        assetExport.exportAsynchronously { () -> Void in
            switch assetExport.status {

            case .completed:
                print("success")
            case .failed:
                print("failed \(String(describing: assetExport.error))")
            case .cancelled:
                print("cancelled \(String(describing: assetExport.error))")
            default:
                print("unknown status")
            }
        }

    }

This is my output: [screenshot]

And this is what I want: [screenshot]

Since I don't know what the best approach is for making a SIDE BY SIDE / DUET video, so far I have used AVFoundation. I am not using any third-party framework or pods.

I want to ask: what is the best way to achieve this? Should the videos be merged on the server side or in the app? And which approach should I use?

Any help would be highly appreciated. Thanks.

3 Answers:

Answer 0 (score: 0)

To achieve this, I would create a new AVMutableComposition object containing 2 video tracks, and set a transform on each track to place them side by side.

Then save it using an AVAssetExportSession.

(Swift code not tested.)
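A rough sketch of the idea described above (untested, in the spirit of the answer; it assumes both clips are 1920×1080 landscape, and leftURL/rightURL are placeholders for your clip URLs):

    import AVFoundation

    let composition = AVMutableComposition()
    let leftAsset = AVAsset(url: leftURL)    // leftURL / rightURL: your clip URLs
    let rightAsset = AVAsset(url: rightURL)

    // Copy an asset's first video track into a new composition track
    func addTrack(from asset: AVAsset, to composition: AVMutableComposition) -> AVMutableCompositionTrack? {
        guard let source = asset.tracks(withMediaType: .video).first,
              let track = composition.addMutableTrack(withMediaType: .video,
                                                      preferredTrackID: kCMPersistentTrackID_Invalid)
        else { return nil }
        try? track.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                   of: source, at: .zero)
        return track
    }

    let leftTrack = addTrack(from: leftAsset, to: composition)!
    let rightTrack = addTrack(from: rightAsset, to: composition)!

    // Canvas wide enough for two half-scaled 1920x1080 clips (960 + 960)
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 1920, height: 540)
    videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero,
                                        duration: CMTimeMaximum(leftAsset.duration, rightAsset.duration))

    // Scale each clip to half size; shift the right one over by 960
    let halfScale = CGAffineTransform(scaleX: 0.5, y: 0.5)
    let leftInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: leftTrack)
    leftInstruction.setTransform(halfScale, at: .zero)
    let rightInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: rightTrack)
    rightInstruction.setTransform(halfScale.concatenating(CGAffineTransform(translationX: 960, y: 0)),
                                  at: .zero)

    instruction.layerInstructions = [leftInstruction, rightInstruction]
    videoComposition.instructions = [instruction]

The composition and videoComposition would then go to an AVAssetExportSession, as in the question's code.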

Answer 1 (score: 0)

Actually, AVAssetExportSession is designed for simple needs and is too limited for your case.

You have to use AVAssetWriter.

You add AVAssetWriterInput objects to the AVAssetWriter.

You configure each input's positioning through its transform property.

Then you feed CMSampleBuffers (one per image buffer) to the AVAssetWriterInput by calling append.

For a complete example, see the Apple documentation: https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/05_Export.html#//apple_ref/doc/uid/TP40010188-CH9-SW2
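A minimal skeleton of that writer pipeline (untested; only one input shown, error handling omitted, and outputURL plus the nextSampleBuffer() reader callback are placeholders, not real API):

    import AVFoundation

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)  // outputURL: placeholder

    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.transform = CGAffineTransform(translationX: 960, y: 0)  // positioning via the input's transform
    writer.add(input)

    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // Pull sample buffers from an AVAssetReader (not shown) and append them
    input.requestMediaDataWhenReady(on: DispatchQueue(label: "writer")) {
        while input.isReadyForMoreMediaData {
            guard let buffer = nextSampleBuffer() else {  // nextSampleBuffer(): your reader code
                input.markAsFinished()
                writer.finishWriting { print(writer.status == .completed ? "done" : "failed") }
                return
            }
            input.append(buffer)
        }
    }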

Answer 2 (score: 0)

It depends on the width of your video frame (render size). If your render width is 800, you have to translate the second merged video by 400 (half and half). Likewise, you have to scale each video down or up to fit, as in the sample code snippet below.

        var renderSize = CGSize(width: 800, height: 534)
        // orientationFromTransform is a custom helper extension that inspects
        // the track's preferred transform to detect portrait orientation
        let assetInfo = AVMutableComposition.orientationFromTransform(firstTrack.preferredTransform)
        renderSize = assetInfo.isPortrait ? CGSize(width: renderSize.height, height: renderSize.width) : renderSize
        
        // Each video is scaled to half the render width and the full render height
        let scaleRatioX = (renderSize.width / 2) / firstTrack.naturalSize.width
        let scaleRatioY = renderSize.height / firstTrack.naturalSize.height
        
        let scaleRatio2X = (renderSize.width / 2) / secondTrack.naturalSize.width
        let scaleRatio2Y = renderSize.height / secondTrack.naturalSize.height
        
        print("Scale Video 1: \(scaleRatioX), \(scaleRatioY)")
        print("Scale Video 2: \(scaleRatio2X), \(scaleRatio2Y)")
        
        // First video: left half, no translation
        let firstlayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)
        let scale = CGAffineTransform(scaleX: scaleRatioX, y: scaleRatioY)
        let move = CGAffineTransform(translationX: 0, y: 0)
        firstlayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        // Second video: right half, shifted by half of the 800-point render width
        let secondlayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)
        let secondScale = CGAffineTransform(scaleX: scaleRatio2X, y: scaleRatio2Y)
        let secondMove = CGAffineTransform(translationX: 400, y: 0)
        secondlayerInstruction.setTransform(secondScale.concatenating(secondMove), at: .zero)
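The half-width scale math in that snippet can be checked with plain arithmetic; a small sketch (the helper name is my own, not from the answer):

```swift
import Foundation

// Scale factors that fit a track of the given natural size into one
// half (left or right) of a side-by-side render canvas.
func halfCanvasScale(renderWidth: Double, renderHeight: Double,
                     trackWidth: Double, trackHeight: Double) -> (x: Double, y: Double) {
    return ((renderWidth / 2) / trackWidth, renderHeight / trackHeight)
}

// A 1920x1080 track squeezed into one half of an 800x534 canvas:
let s = halfCanvasScale(renderWidth: 800, renderHeight: 534,
                        trackWidth: 1920, trackHeight: 1080)
// The scaled track ends up 400 points wide, which is exactly why the
// second video's translationX must be 400 on an 800-point canvas.
print(s.x * 1920)
print(s.y * 1080)
```

Note that this stretches each clip to fill its half rather than preserving aspect ratio; preserving it would mean using min(scaleX, scaleY) for both axes and accepting letterboxing.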