MTAudioProcessingTap with kMTAudioProcessingTapCreationFlag_PostEffects does not reflect AVAudioMix volume

Posted: 2017-10-29 08:51:42

Tags: ios macos avfoundation avplayer

I'm trying to build level metering for AVPlayer. I'm doing this with an MTAudioProcessingTap that gets passed to an AVAudioMix, which in turn is passed to the AVPlayerItem. The MTAudioProcessingTap is created with the kMTAudioProcessingTapCreationFlag_PostEffects flag. Technical Q&A QA1783 has the following to say about the PreEffects and PostEffects flags:

  

  When you create a "pre-effects" audio tap using the kMTAudioProcessingTapCreationFlag_PreEffects flag, the tap will be called before any effects specified by AVAudioMixInputParameters are applied; when you create a "post-effects" tap by using the kMTAudioProcessingTapCreationFlag_PostEffects flag, the tap will be called after those effects are applied. Currently the only "effect" supported by AVAudioMixInputParameters is a linear volume ramp.
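For context, the linear volume ramp mentioned in the quote is conceptually just a per-sample gain that interpolates between two volume values. A minimal pure-Swift sketch of that idea (illustrative only; applyLinearVolumeRamp is a hypothetical helper, not an AVFoundation API):

```swift
import Foundation

// Conceptual model of AVAudioMixInputParameters' linear volume ramp:
// each sample is scaled by a gain that interpolates linearly from
// startVolume to endVolume across the buffer.
func applyLinearVolumeRamp(samples: [Float],
                           startVolume: Float,
                           endVolume: Float) -> [Float] {
    guard samples.count > 1 else { return samples.map { $0 * startVolume } }
    return samples.enumerated().map { (i, sample) in
        let t = Float(i) / Float(samples.count - 1)   // 0...1 across the buffer
        let gain = startVolume + (endVolume - startVolume) * t
        return sample * gain
    }
}

let ramped = applyLinearVolumeRamp(samples: [1, 1, 1], startVolume: 1, endVolume: 0)
// → [1.0, 0.5, 0.0]
```

With a constant volume of 0, every output sample would be 0, which is what a "post-effects" tap would be expected to observe.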

The problem:
When the tap is created with kMTAudioProcessingTapCreationFlag_PostEffects, I would expect the samples received by the MTAudioProcessingTap to reflect the volume or volume ramps set on the AVAudioMixInputParameters. For example, if I set the volume to 0, I would expect to receive all-zero samples. However, the samples I receive appear to be entirely unaffected by the volume or the volume ramps.

Am I doing something wrong?

Here is a quick and dirty playground that illustrates the problem. The example sets the volume directly, but I observed the same issue when using volume ramps. Tested on both macOS and iOS:

import Foundation
import PlaygroundSupport
import AVFoundation
import Accelerate

PlaygroundPage.current.needsIndefiniteExecution = true

let assetURL = Bundle.main.url(forResource: "sample", withExtension: "mp3")!

let asset = AVAsset(url: assetURL)
let playerItem = AVPlayerItem(asset: asset)
var audioMix = AVMutableAudioMix()

// The volume. Set to > 0 to hear something.
let kVolume: Float = 0.0

var parameterArray: [AVAudioMixInputParameters] = []

for assetTrack in asset.tracks(withMediaType: .audio) {

    let parameters = AVMutableAudioMixInputParameters(track: assetTrack)
    parameters.setVolume(kVolume, at: kCMTimeZero)
    parameterArray.append(parameters)

    // Omitting most callbacks to keep sample short:
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { (tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut) in

            guard MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut) == noErr else {
                preconditionFailure()
            }

            // Assume 32-bit float format, native endian, interleaved:

            let bufferList = UnsafeMutableAudioBufferListPointer(bufferListInOut)

            for (i, buffer) in bufferList.enumerated() {

                let channels = Int(buffer.mNumberChannels)
                let stride = vDSP_Stride(channels)
                let numElements = vDSP_Length(buffer.mDataByteSize / UInt32(MemoryLayout<Float>.stride))
                let start = buffer.mData!.bindMemory(to: Float.self, capacity: Int(numElements))

                for j in 0..<channels {

                    // Use vDSP_maxmgv to find the maximum amplitude of channel j:
                    // offset to the channel's first sample and step by the channel count.
                    var magnitude: Float = 0
                    vDSP_maxmgv(start + j, stride, &magnitude, numElements / vDSP_Length(channels))

                    DispatchQueue.main.async {
                        print("buff: \(i), chan: \(j), max: \(magnitude)")
                    }
                }
            }
        }
    )

    var tap: Unmanaged<MTAudioProcessingTap>?

    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap) == noErr else {
        preconditionFailure()
    }

    parameters.audioTapProcessor = tap?.takeUnretainedValue()

}

audioMix.inputParameters = parameterArray

playerItem.audioMix = audioMix

let player = AVPlayer(playerItem: playerItem)
player.rate = 1.0
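Aside: the per-channel peak computation in the process callback can be sanity-checked in plain Swift, without the tap machinery. channelPeaks below is a hypothetical helper mirroring what vDSP_maxmgv computes over an interleaved buffer:

```swift
import Foundation

// Peak magnitude per channel of an interleaved Float buffer: for channel j,
// start at index j and step by the channel count, taking the max |sample|.
func channelPeaks(interleaved: [Float], channels: Int) -> [Float] {
    (0..<channels).map { j in
        stride(from: j, to: interleaved.count, by: channels)
            .map { abs(interleaved[$0]) }
            .max() ?? 0
    }
}

// Two channels, interleaved L R L R:
let peaks = channelPeaks(interleaved: [0.1, -0.9, 0.5, 0.2], channels: 2)
// → [0.5, 0.9]
```

If the post-effects tap honored a volume of 0, these peaks would all come out as 0 during playback.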

0 Answers