Real-time frequency modulation with AVAudioEngine

Asked: 2018-02-21 17:16:01

Tags: ios swift avfoundation avaudioengine

I want to modify the incoming signal in real time and send it to the iOS device's speaker. I have read that AVAudioEngine can be used for this kind of task, but I couldn't find documentation or examples for what I'm trying to achieve.

For testing purposes, I have set up the following:

audioEngine = AVAudioEngine()

// Simple test effect: a reverb with a 50/50 wet/dry mix
let unitEffect = AVAudioUnitReverb()
unitEffect.wetDryMix = 50

audioEngine.attach(unitEffect)

// Microphone -> reverb -> speaker
audioEngine.connect(audioEngine.inputNode, to: unitEffect, format: nil)
audioEngine.connect(unitEffect, to: audioEngine.outputNode, format: nil)

audioEngine.prepare()

When a button is pressed I start the engine (a second press stops it again):

do {
    try audioEngine.start()
} catch {
    print(error)
}

// ...and on the next button press:
audioEngine.stop()
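
Note that starting the engine like this assumes the audio session allows recording: on iOS, inputNode only delivers microphone audio when the shared AVAudioSession uses a record-capable category, and the category options also decide whether playback goes to the quiet earpiece or the loud bottom speaker. A minimal sketch, assuming a hypothetical configureAudioSession() helper that runs before audioEngine.start():

import AVFoundation

func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        // .playAndRecord enables microphone input;
        // .defaultToSpeaker routes output to the bottom loudspeaker
        try session.setCategory(.playAndRecord,
                                mode: .default,
                                options: [.defaultToSpeaker])
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}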

The reverb effect is applied to the signal and I can hear that it works. So now I would like to get rid of the reverb and:

  1. Modulate the incoming signal, e.g. invert it, modulate its frequency, and so on. Is there a collection of ready-made effects I could use, or some way to modulate the frequency mathematically? (See the sketch after this list.)
  2. When I start this on an iOS device, I do get the reverb, but the output only comes out of the quiet earpiece speaker at the top, not the loud speaker at the bottom. How can I change that? (The audio session sketch above touches on this.)
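
As for the first point: AVFoundation ships a small collection of ready-made effect nodes that can be attached exactly like the reverb above, among them AVAudioUnitEQ, AVAudioUnitDelay, AVAudioUnitDistortion and AVAudioUnitTimePitch. A minimal sketch that pitch-shifts the microphone signal instead of adding reverb, reusing the audioEngine from the setup above (the +1200 value is just an illustrative choice):

let pitchEffect = AVAudioUnitTimePitch()
pitchEffect.pitch = 1200  // measured in cents; +1200 = one octave up

audioEngine.attach(pitchEffect)
audioEngine.connect(audioEngine.inputNode, to: pitchEffect, format: nil)
audioEngine.connect(pitchEffect, to: audioEngine.outputNode, format: nil)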

1 Answer:

Answer 0 (score: 1):

This GitHub repository does literally that: https://github.com/dave234/AppleAudioUnit

Just add BufferedAudioUnit to your project from there, then subclass it with the following implementation:

AudioProcessingUnit.h:

#import "BufferedAudioUnit.h"

@interface AudioProcessingUnit : BufferedAudioUnit

@end

AudioProcessingUnit.m:

#import "AudioProcessingUnit.h"

@implementation AudioProcessingUnit

-(ProcessEventsBlock)processEventsBlock:(AVAudioFormat *)format {

    return ^(AudioBufferList       *inBuffer,
             AudioBufferList       *outBuffer,
             const AudioTimeStamp  *timestamp,
             AVAudioFrameCount     frameCount,
             const AURenderEvent   *realtimeEventListHead) {

        for (int i = 0; i < inBuffer->mNumberBuffers; i++) {

            float *in  = inBuffer->mBuffers[i].mData;
            float *out = outBuffer->mBuffers[i].mData;

            // mDataByteSize counts bytes, not samples, so convert
            // before iterating over the float samples.
            int sampleCount = inBuffer->mBuffers[i].mDataByteSize / sizeof(float);
            for (int j = 0; j < sampleCount; j++) {
                out[j] = -in[j]; /* example: invert the signal; put your own DSP here */
            }
        }
    };
}

@end
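
If the rest of the project is Swift, keep in mind that AudioProcessingUnit is an Objective-C class, so it has to be exposed to Swift through the project's bridging header (an #import "AudioProcessingUnit.h" line there) before the setup code below will compile.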

Then, in your AVAudioEngine setup:

// The component description acts as a lookup key: the same
// type/subtype/manufacturer triple must be used for registration
// and instantiation below.
let audioComponentDescription = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: 0x0,
    componentFlags: 0,
    componentFlagsMask: 0
)

AUAudioUnit.registerSubclass(
    AudioProcessingUnit.self,
    as: audioComponentDescription,
    name: "AudioProcessingUnit",
    version: 1
)

AVAudioUnit.instantiate(
    with: audioComponentDescription,
    options: []
) { (audioUnit, error) in
    guard let audioUnit = audioUnit else {
        NSLog("Audio unit is NULL")
        return
    }

    // Use the hardware input format for both connections so the
    // engine does not have to convert sample rates in between.
    let formatHardwareInput = self.engine.inputNode.inputFormat(forBus: 0)

    self.engine.attach(audioUnit)
    self.engine.connect(
        self.engine.inputNode,
        to: audioUnit,
        format: formatHardwareInput
    )
    self.engine.connect(
        audioUnit,
        to: self.engine.outputNode,
        format: formatHardwareInput
    )
}
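
This only builds the processing graph; as in the question's own test code, the engine still has to be prepared and started once the connections are made, for example at the end of the completion handler:

self.engine.prepare()
do {
    try self.engine.start()
} catch {
    print("Failed to start audio engine: \(error)")
}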