Tap Microphone Input Using AVAudioEngine in Swift

Date: 2014-11-29 14:59:39

Tags: ios swift avfoundation core-audio ios8.1

I'm very excited about the new AVAudioEngine. It looks like a nice API wrapper around Audio Units. Unfortunately, the documentation is essentially nonexistent so far, and I'm having trouble getting even a simple graph to work.

With the following simple code to set up the audio engine graph, the tap block is never called. It mimics some of the sample code floating around the web, though that code didn't work either.

let inputNode = audioEngine.inputNode
var error: NSError?
let bus = 0

inputNode.installTapOnBus(bus, bufferSize: 2048, format: inputNode.inputFormatForBus(bus)) { 
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    println("sfdljk")
}

audioEngine.prepare()
if audioEngine.startAndReturnError(&error) {
    println("started audio")
} else {
    if let engineStartError = error {
        println("error starting audio: \(engineStartError.localizedDescription)")
    }
}

What I'm after is the raw PCM buffers for analysis. I don't need any effects or output. According to the WWDC talk "502: AVAudioEngine in Practice", this setup should work:

Now, if you want to capture data from the input node, you can install a node tap, and we've talked about that. But what's interesting about this particular example is that if I just wanted to work with the input node, say to capture data from the microphone and examine it, analyze it in real time, or write it out to a file, I can install a tap directly on the input node. The tap does the work of pulling the data off the input node, stuffing it into buffers, and returning those to the application. Once you have that data, you can do whatever you need to with it.

Here are some of the links I've tried:

  1. http://hondrouthoughts.blogspot.com/2014/09/avfoundation-audio-monitoring.html
  2. http://jamiebullock.com/post/89243252529/live-coding-audio-with-swift-playgrounds (SIGABRT in the playground at startAndReturnError)

EDIT: Here is an implementation based on Thorsten Karrer's suggestion. Unfortunately, it doesn't work:

    class AudioProcessor {
        let audioEngine = AVAudioEngine()
    
        init(){
            let inputNode = audioEngine.inputNode
            let bus = 0
            var error: NSError?
    
            inputNode.installTapOnBus(bus, bufferSize: 2048, format:inputNode.inputFormatForBus(bus)) {
                (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
                    println("sfdljk")
            }
    
            audioEngine.prepare()
            audioEngine.startAndReturnError(nil)
            println("started audio")
        }
    }
    

4 Answers

Answer 0 (score: 20)

It could be that your AVAudioEngine is going out of scope and being released by ARC ("if you liked it, then you should have put a retain on it...").

The following code (with the engine moved to an ivar so it sticks around) fires the tap:

class AppDelegate: NSObject, NSApplicationDelegate {

    let audioEngine  = AVAudioEngine()

    func applicationDidFinishLaunching(aNotification: NSNotification) {
        let inputNode = audioEngine.inputNode
        let bus = 0
        inputNode.installTapOnBus(bus, bufferSize: 2048, format: inputNode.inputFormatForBus(bus)) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            println("sfdljk")
        }

        audioEngine.prepare()
        audioEngine.startAndReturnError(nil)
    }
}

(I removed the error handling for brevity.)
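For readers on newer SDKs, here is a minimal sketch of the same fix written against the renamed APIs (installTap(onBus:) instead of installTapOnBus, and a throwing start() instead of startAndReturnError). The class name and print statements are just placeholders, not part of the answer above:

import AVFoundation

final class MicTap {
    // Held as a stored property so ARC keeps the engine alive while the tap runs.
    private let audioEngine = AVAudioEngine()

    func start() throws {
        // On iOS you also need an AVAudioSession configured for recording and
        // microphone permission before any input data will arrive.
        let inputNode = audioEngine.inputNode
        let bus = 0

        inputNode.installTap(onBus: bus, bufferSize: 2048, format: inputNode.inputFormat(forBus: bus)) { buffer, _ in
            print("got \(buffer.frameLength) frames")
        }

        audioEngine.prepare()
        try audioEngine.start()
    }

    func stop() {
        audioEngine.inputNode.removeTap(onBus: 0)
        audioEngine.stop()
    }
}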

Answer 1 (score: 11)

Update: I've put together a complete working example that records microphone input, applies some effects (reverb, delay, distortion) at runtime, and saves it all to an output file.

var engine = AVAudioEngine()
var distortion = AVAudioUnitDistortion()
var reverb = AVAudioUnitReverb()
var audioBuffer = AVAudioPCMBuffer()
var outputFile = AVAudioFile()
var delay = AVAudioUnitDelay()

// Initialize the audio engine

func initializeAudioEngine() {

    engine.stop()
    engine.reset()
    engine = AVAudioEngine()

    isRealTime = true    // flag defined elsewhere in the answerer's class
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord)

        let ioBufferDuration = 128.0 / 44100.0

        try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)

    } catch {

        assertionFailure("AVAudioSession setup error: \(error)")
    }

    let fileUrl = URLFor("/NewRecording.caf")    // URLFor is the answerer's own helper; a sketch follows this function
    print(fileUrl)
    do {

        try outputFile = AVAudioFile(forWriting:  fileUrl!, settings: engine.mainMixerNode.outputFormatForBus(0).settings)
    }
    catch {
        // file creation errors are ignored here
    }

    let input = engine.inputNode!
    let format = input.inputFormatForBus(0)

    //settings for reverb
    reverb.loadFactoryPreset(.MediumChamber)
    reverb.wetDryMix = 40 //0-100 range
    engine.attachNode(reverb)

    delay.delayTime = 0.2 // 0-2 range
    engine.attachNode(delay)

    //settings for distortion
    distortion.loadFactoryPreset(.DrumsBitBrush)
    distortion.wetDryMix = 20 //0-100 range
    engine.attachNode(distortion)


    engine.connect(input, to: reverb, format: format)
    engine.connect(reverb, to: distortion, format: format)
    engine.connect(distortion, to: delay, format: format)
    engine.connect(delay, to: engine.mainMixerNode, format: format)

    assert(engine.inputNode != nil)

    isReverbOn = false    // flag defined elsewhere in the answerer's class

    try! engine.start()
}
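URLFor(_:) above is not shown in the answer. A hypothetical sketch of what such a helper might do, assuming the recording lives in the app's Documents directory (note the file name passed in already includes a leading "/"):

// Hypothetical helper, written in the same Swift 2 style as the code above.
func URLFor(fileName: String) -> NSURL? {
    guard let documentsPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true).first else {
        return nil
    }
    return NSURL(fileURLWithPath: documentsPath + fileName)
}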

// Now the recording function:

func startRecording() {

    let mixer = engine.mainMixerNode
    let format = mixer.outputFormatForBus(0)

    mixer.installTapOnBus(0, bufferSize: 1024, format: format, block:
        { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in

            print(NSString(string: "writing"))
            do{
                try self.outputFile.writeFromBuffer(buffer)
            }
            catch {
                print(NSString(string: "Write failed"));
            }
    })
}

func stopRecording() {

    engine.mainMixerNode.removeTapOnBus(0)
    engine.stop()
}

I hope this helps. Thanks!

Answer 2 (score: 2)

The answers above didn't work for me, but the following did: I installed a tap on the mixer node.

    mMixerNode?.installTapOnBus(0, bufferSize: 4096, format: mMixerNode?.outputFormatForBus(0)) {
        (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
        NSLog("tapped")
    }
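For the mixer tap to actually carry microphone audio, the input node has to be connected into that mixer somewhere upstream. A minimal sketch of one such setup using the current API names; the class, routing, and buffer size here are illustrative assumptions, not part of the answer above:

import AVFoundation

final class MixerTapExample {
    private let engine = AVAudioEngine()

    func start() throws {
        let input = engine.inputNode
        let mixer = engine.mainMixerNode

        // Route the microphone into the main mixer so the mixer tap sees its audio.
        // Note that this also routes the mic through to the output.
        engine.connect(input, to: mixer, format: input.inputFormat(forBus: 0))

        mixer.installTap(onBus: 0, bufferSize: 4096, format: mixer.outputFormat(forBus: 0)) { buffer, _ in
            NSLog("tapped")
        }

        engine.prepare()
        try engine.start()
    }
}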

Answer 3 (score: 2)

Nice topic!

Hi Brod,

I found my solution in your topic. Here is a similar thread: Generate AVAudioPCMBuffer with AVAudioRecorder

See the WWDC 2014 session "502 - AVAudioEngine in Practice": capturing the microphone and creating a buffer with the tap code is shown at around 20:00 and again at 21:50.

Here is the Swift 3 code:

@IBAction func button01Pressed(_ sender: Any) {

    let inputNode = audioEngine.inputNode
    let bus = 0
    inputNode?.installTap(onBus: bus, bufferSize: 2048, format: inputNode?.inputFormat(forBus: bus)) {
        (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in

            let theLength = Int(buffer.frameLength)
            print("theLength = \(theLength)")

            // Collect the samples from channel 0 as doubles
            var samplesAsDoubles: [Double] = []
            for i in 0 ..< Int(buffer.frameLength)
            {
                let theSample = Double((buffer.floatChannelData?.pointee[i])!)
                samplesAsDoubles.append(theSample)
            }

            print("samplesAsDoubles.count = \(samplesAsDoubles.count)")

    }

    audioEngine.prepare()
    try! audioEngine.start()

}

To stop the audio:

func stopAudio()
    {

        let inputNode = audioEngine.inputNode
        let bus = 0
        inputNode?.removeTap(onBus: bus)
        self.audioEngine.stop()

    }
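If the goal is live analysis rather than collecting every sample, a simple level metric can be computed straight from the tapped buffer. Below is a minimal sketch of an RMS calculation over channel 0, assuming the tap delivers non-interleaved float PCM (the default for an input-node tap); it could be called from inside the tap block as let level = rmsLevel(of: buffer). The helper name is illustrative, not part of the answer above.

import AVFoundation

// Hypothetical helper: root-mean-square level of channel 0 of a tapped buffer.
// Assumes a float, non-interleaved PCM format.
func rmsLevel(of buffer: AVAudioPCMBuffer) -> Float {
    guard let channelData = buffer.floatChannelData, buffer.frameLength > 0 else { return 0 }
    let samples = UnsafeBufferPointer(start: channelData[0], count: Int(buffer.frameLength))
    let sumOfSquares = samples.reduce(Float(0)) { $0 + $1 * $1 }
    return sqrt(sumOfSquares / Float(buffer.frameLength))
}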