Merge two videos in iOS

Posted: 2018-05-16 08:28:10

Tags: ios swift xcode video avfoundation

I can merge two videos, and the duration of the final result is correct, but only the first video actually plays; for the duration of the second video the picture stays frozen on a static frame. For example: merging two 6-second videos produces a 12-second video that plays correctly up to the 6-second mark, after which the image freezes.

func mergeVideos(videoMergedUrl:URL) {
    let mainComposition = AVMutableVideoComposition()
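    // Running cursor: the timeline position where the next clip will be inserted.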
    var startDuration:CMTime = kCMTimeZero
    let mainInstruction = AVMutableVideoCompositionInstruction()
    let mixComposition = AVMutableComposition()
    var allVideoInstruction = [AVMutableVideoCompositionLayerInstruction]()

    for i:Int in 0 ..< listSegment.count {
        let currentAsset = listSegment[i]
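        // Each clip is given its own video track here, so the clips end up layered on top of each other rather than appended to a single track.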
        let currentTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            try currentTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, currentAsset.duration), of: currentAsset.tracks(withMediaType: AVMediaType.video)[0], at: startDuration)
            let currentInstruction:AVMutableVideoCompositionLayerInstruction = videoCompositionInstructionForTrack(currentTrack!, asset: currentAsset)
            //currentInstruction.setOpacityRamp(fromStartOpacity: 0.0, toEndOpacity: 1.0, timeRange:CMTimeRangeMake(startDuration, CMTimeMake(1, 1)))
            /*if i != assets.count - 1 {
                //Sets Fade out effect at the end of the video.
                currentInstruction.setOpacityRamp(fromStartOpacity: 1.0,
                                                  toEndOpacity: 0.0,
                                                  timeRange:CMTimeRangeMake(
                                                    CMTimeSubtract(
                                                        CMTimeAdd(currentAsset.duration, startDuration),
                                                        CMTimeMake(1, 1)),
                                                    CMTimeMake(2, 1)))
            }*/
            /*let transform:CGAffineTransform = currentTrack!.preferredTransform

            if orientationFromTransform(transform).isPortrait {
                let outputSize:CGSize = CGSize(width: 640, height: 480)
                let horizontalRatio = CGFloat(outputSize.width) / (currentTrack?.naturalSize.width)!
                let verticalRatio = CGFloat(outputSize.height) / (currentTrack?.naturalSize.height)!
                let scaleToFitRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                let FirstAssetScaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
                if currentAsset.g_orientation == .landscapeLeft {
                    let rotation = CGAffineTransform(rotationAngle: .pi)
                    let translateToCenter = CGAffineTransform(translationX: 640, y: 480)
                    let mixedTransform = rotation.concatenating(translateToCenter)
                    currentInstruction.setTransform((currentTrack?.preferredTransform.concatenating(FirstAssetScaleFactor).concatenating(mixedTransform))!, at: kCMTimeZero)
                } else {
                    currentInstruction.setTransform((currentTrack?.preferredTransform.concatenating(FirstAssetScaleFactor))!, at: kCMTimeZero)
                }
            }*/

            allVideoInstruction.append(currentInstruction) // Add this clip's layer instruction to the instructions array.
            startDuration = CMTimeAdd(startDuration, currentAsset.duration)
        } catch {
            print("Error inserting video track: \(error)")
        }
    }

    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, startDuration)
    mainInstruction.layerInstructions = allVideoInstruction

    mainComposition.instructions = [mainInstruction]
    mainComposition.frameDuration = CMTimeMake(1, 30)
    mainComposition.renderSize = CGSize(width: 640, height: 480)

    let manager = FileManager.default
    _ = try? manager.removeItem(at: videoMergedUrl)

    guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset640x480) else { return }
    exporter.outputURL = videoMergedUrl
    exporter.outputFileType = AVFileType.mp4
    exporter.shouldOptimizeForNetworkUse = false
    exporter.videoComposition = mainComposition

    // Perform the Export
    exporter.exportAsynchronously {
        DispatchQueue.main.async {
            self.exportDidFinish(exporter)
        }
    }
}

1 Answer:

Answer 0 (score: 0)

After following this tutorial, I ran into the same problem. I solved it by using a different call instead of AVMutableComposition.insertTimeRange to add the clips to the composition.
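
The answer does not show the exact replacement call, but a common fix for this particular freeze symptom is to append every clip to a single AVMutableCompositionTrack, so that no layering (and no layer instructions) is involved. The following is a minimal sketch under that assumption, reusing the question's listSegment and videoMergedUrl names; it is one possible reading of the fix, not the answerer's verbatim code.

import AVFoundation

func mergeVideosSequentially(listSegment: [AVAsset], videoMergedUrl: URL) {
    let mixComposition = AVMutableComposition()

    // One shared video track: every clip is appended to it back to back,
    // so there is never a second, frozen track layered on top.
    guard let videoTrack = mixComposition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }

    var cursor = kCMTimeZero
    for asset in listSegment {
        guard let sourceTrack = asset.tracks(withMediaType: .video).first else { continue }
        do {
            try videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                           of: sourceTrack,
                                           at: cursor)
            cursor = CMTimeAdd(cursor, asset.duration)
        } catch {
            print("Error inserting clip: \(error)")
        }
    }

    _ = try? FileManager.default.removeItem(at: videoMergedUrl)

    guard let exporter = AVAssetExportSession(asset: mixComposition,
                                              presetName: AVAssetExportPreset640x480) else { return }
    exporter.outputURL = videoMergedUrl
    exporter.outputFileType = .mp4
    exporter.exportAsynchronously {
        // Check exporter.status and exporter.error before using the output file.
    }
}

Because everything lives on one track, the opacity ramps and transforms from the question's video composition are no longer needed; if scaling or rotation is still required, an AVMutableVideoComposition can be added back on top of this composition.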