AudioKit - How to get real-time floatChannelData from the microphone?

Asked: 2018-06-11 20:06:50

Tags: ios swift audiokit

I'm new to AudioKit and I'm trying to do real-time digital signal processing on the input audio from the microphone.

I know the data I want is in an AKAudioFile's floatChannelData, but what if I want to get it in real time? I'm currently using AKMicrophone, AKFrequencyTracker, AKNodeOutputPlot, and AKBooster, and I'm plotting the tracker's amplitude data. However, that data is not the same as the audio signal (as you know, it's RMS). Is there any way to get the signal's Float data from the microphone, or even from AKNodeOutputPlot? I only need read access.

AKSettings.audioInputEnabled = true
mic = AKMicrophone()
plot = AKNodeOutputPlot(mic, frame: audioInputPlot.bounds)
tracker = AKFrequencyTracker(mic)
silence = AKBooster(tracker, gain: 0)
AudioKit.output = silence
try AudioKit.start()

The creator of AudioKit suggested here:

  AKNodeOutputPlot works and it's a short file. You basically just tap the node and grab the data.

If I have a plot instance (AKNodeOutputPlot) and a microphone (AKMicrophone) and want to output those values to a label, what would that look like in my viewController?

2 Answers:

Answer 0: (score: 1)

Tap the node you want to get data from. I used AKNodeOutputPlot in the quote above because it's pretty simple, just using that data as input to a plot, but you can grab the data and do whatever you want with it. In this code (from AKNodeOutputPlot):

internal func setupNode(_ input: AKNode?) {
    if !isConnected {
        input?.avAudioNode.installTap(
            onBus: 0,
            bufferSize: bufferSize,
            format: nil) { [weak self] (buffer, _) in

                guard let strongSelf = self else {
                    AKLog("Unable to create strong reference to self")
                    return
                }
                buffer.frameLength = strongSelf.bufferSize
                let offset = Int(buffer.frameCapacity - buffer.frameLength)
                if let tail = buffer.floatChannelData?[0] {
                    strongSelf.updateBuffer(&tail[offset], withBufferSize: strongSelf.bufferSize)
                }
        }
    }
    isConnected = true
}

You get the buffer data in real time. Here we just send it to updateBuffer, where it gets plotted, but instead of plotting you could do something else with it.
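For example, here is a minimal sketch of installing your own tap instead of relying on the plot (assuming mic is the AKMicrophone from the question; the peak computation is only an illustration of "doing something else" with the samples):

import AudioKit
import AVFoundation

// A sketch: read floatChannelData directly from our own tap on the microphone node
mic.avAudioNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    guard let channelData = buffer.floatChannelData?[0] else { return }
    let samples = UnsafeBufferPointer(start: channelData,
                                      count: Int(buffer.frameLength))
    // Any read-only DSP can go here; for instance, the peak amplitude of this buffer
    let peak = samples.map { abs($0) }.max() ?? 0
    print("peak amplitude:", peak)
}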

Answer 1: (score: 0)

To complete Aurelius Prochazka's answer:

To record the audio flowing through a node, you attach a tap to it. A tap is just a closure that will be called each time a buffer is available.

Here is sample code you can reuse in your own class:

import AudioKit
import AVFoundation

// Properties of the enclosing class
let bufferSize: AVAudioFrameCount = 4096
var mic = AKMicrophone()

func initMicrophone() {

  // Optional: set the sampling rate of the microphone
  AKSettings.sampleRate = 44100

  // Link the microphone node to the output of AudioKit with a volume of 0.
  AudioKit.output = AKBooster(mic, gain: 0)

  // Start the AudioKit engine
  try! AudioKit.start()

  // Install a tap on the microphone node
  mic?.avAudioNode.installTap(
      onBus: 0, bufferSize: bufferSize, format: nil // I chose a buffer size of 4096
  ) { [weak self] (buffer, _) in // self is now a weak reference, to prevent retain cycles

      // We try to create a strong reference to self, and name it strongSelf
      guard let strongSelf = self else {
        print("Recorder: Unable to create strong reference to self #1")
        return
      }

      // Trim the buffer to the requested length and locate the newest samples
      buffer.frameLength = strongSelf.bufferSize
      let offset = Int(buffer.frameCapacity - buffer.frameLength)
      if let tail = buffer.floatChannelData?[0] {
        // Convert the contents of the buffer to a Swift array
        let samples = Array(UnsafeBufferPointer(start: tail + offset,
                                                count: Int(strongSelf.bufferSize)))
        strongSelf.myFunctionHandlingData(samples)
      }
  }
}

func myFunctionHandlingData(_ data: [Float]) {
  // ...
}

Be careful: you may need a DispatchQueue or another synchronization mechanism to share this data between threads. In my case I use:

DispatchQueue.main.async { [weak self]  in
  guard let strongSelf = self else {
    print("Recorder: Unable to create strong reference to self #2")
    return
  }
  strongSelf.myFunctionHandlingData(samples)
}

so that my function runs on the main thread.
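One caveat to add: an AVAudioNode supports a single tap per bus, so remove the tap once you no longer need the samples (removeTap(onBus:) is the standard AVAudioNode call):

// Stop receiving buffers when you are done with the data
mic?.avAudioNode.removeTap(onBus: 0)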