AVMutableAudioMix: applying multiple volume changes to a single audio track

Asked: 2017-11-11 19:20:17

Tags: ios swift avfoundation avmutablecomposition

I'm working on an app that merges multiple video clips into one final video. I want to give users the ability to mute individual clips if desired (so only parts of the final merged video would be muted). I've wrapped the AVAssets in a class called "Video" that has a "shouldMute" property.

My problem is that once I set the volume of one of the AVAssetTracks to zero, it stays muted for the remainder of the final video. Here is my code:

    var completeDuration : CMTime = CMTimeMake(0, 1)
    var insertTime = kCMTimeZero
    var layerInstructions = [AVVideoCompositionLayerInstruction]()
    let mixComposition = AVMutableComposition()
    let audioMix = AVMutableAudioMix()

    let videoTrack =
        mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
                                       preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)


    // iterate through video assets and merge together
    for (i, video) in clips.enumerated() {

        let videoAsset = video.asset
        var clipDuration = videoAsset.duration

        do {
            if video == clips.first {
                insertTime = kCMTimeZero
            } else {
                insertTime = completeDuration
            }


            if let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first {
                try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: videoAssetTrack, at: insertTime)
                completeDuration = CMTimeAdd(completeDuration, clipDuration)
            }

            if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
                try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)

                if video.shouldMute {
                    let audioMixInputParams = AVMutableAudioMixInputParameters()
                    audioMixInputParams.trackID = audioTrack!.trackID
                    audioMixInputParams.setVolume(0.0, at: insertTime)
                    audioMix.inputParameters.append(audioMixInputParams)
                }
            }

        } catch let error as NSError {
            print("error: \(error)")
        }

        let videoInstruction = videoCompositionInstructionForTrack(track: videoTrack!, video: video)
        if video != clips.last {
            videoInstruction.setOpacity(0.0, at: completeDuration)
        }

        layerInstructions.append(videoInstruction)
    } // end of video asset iteration

If I add another setVolume:atTime instruction to bring the volume back up to 1.0 at the end of the clip, the first volume instruction is ignored entirely and the whole video plays at full volume.

In other words, this doesn't work:

    if video.shouldMute {
        let audioMixInputParams = AVMutableAudioMixInputParameters()
        audioMixInputParams.trackID = audioTrack!.trackID
        audioMixInputParams.setVolume(0.0, at: insertTime)
        audioMixInputParams.setVolume(1.0, at: completeDuration)
        audioMix.inputParameters.append(audioMixInputParams)
    }

I have set the audioMix on both my AVPlayerItem and my AVAssetExportSession. What am I doing wrong? How can I let users mute the time ranges of individual clips before merging them into the final video?

1 Answer:

Answer 0 (score: 3)

Apparently I was going about this wrong. As shown above, my composition has two AVMutableCompositionTracks: one video track and one audio track. Even though I insert the time ranges of a series of other tracks into those two tracks, there are still ultimately only two tracks. So I only needed a single AVMutableAudioMixInputParameters object associated with my one audio track.

I initialized one AVMutableAudioMixInputParameters object, and then, after inserting each clip's time range, I check whether it should be muted and set a volume ramp for that clip's time range (the time range relative to the entire audio track). Within my clip iteration, that looks like this:
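The answer's code snippet appears to be missing from this page. Below is a minimal sketch of the approach the answer describes, reconstructed using the question's own names (`clips`, `audioTrack`, `insertTime`, `completeDuration`, `shouldMute` are carried over from the question's code); it is not the answerer's original snippet:

    // One params object for the single composition audio track,
    // created once before the clip iteration.
    let audioMixInputParams = AVMutableAudioMixInputParameters()
    audioMixInputParams.trackID = audioTrack!.trackID

    for video in clips {
        // ... insert the clip's video/audio time ranges as in the question ...

        if video.shouldMute {
            // Mute only this clip's range. insertTime is the clip's start;
            // completeDuration has already been advanced past the clip,
            // so it marks the clip's end in the composition's timeline.
            audioMixInputParams.setVolume(0.0, at: insertTime)
            audioMixInputParams.setVolume(1.0, at: completeDuration)
        }
    }

    audioMix.inputParameters = [audioMixInputParams]

Because all the volume ramps live on one AVMutableAudioMixInputParameters object tied to the one audio track, the mute/unmute pairs no longer overwrite each other the way the per-clip objects in the question did.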
