Low-latency input/output AudioQueue

Date: 2015-05-03 08:27:47

Tags: ios swift audiounit audioqueue novocaine

I have two iOS AudioQueues: one input that feeds its samples straight to one output. Unfortunately, there is a very noticeable echo effect :(

Is it possible to do low-latency audio with AudioQueues, or do I really need to use AudioUnits? (I have tried the Novocaine framework, which uses AudioUnits, and the latency there is much lower. I also noticed that this framework seems to use less CPU. Unfortunately, I was unable to use it in my Swift project without making major changes to it.)

Here are some excerpts from my code, done mostly in Swift, except for those callbacks that needed to be implemented in C.

My C glue code simply routes the callbacks back into the Swift functions shown below:

// 16 kHz, mono, 32-bit native-endian float, packed: 4 bytes per frame/packet
private let audioStreamBasicDescription = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: AudioFormatID(kAudioFormatLinearPCM),
    mFormatFlags: AudioFormatFlags(kAudioFormatFlagsNativeFloatPacked),
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 32,
    mReserved: 0)

private let numberOfBuffers = 80
private let bufferSize: UInt32 = 256   // 64 Float32 frames ≈ 4 ms at 16 kHz

private var active = false

private var inputQueue: AudioQueueRef = nil
private var outputQueue: AudioQueueRef = nil

private var inputBuffers = [AudioQueueBufferRef]()
private var outputBuffers = [AudioQueueBufferRef]()
// Free output buffers form a singly linked list, threaded through each
// buffer's mUserData pointer.
private var headOfFreeOutputBuffers: AudioQueueBufferRef = nil

// callbacks implemented in Swift
private func audioQueueInputCallback(inputBuffer: AudioQueueBufferRef) {
    if active {
        if headOfFreeOutputBuffers != nil {
            // Pop a free output buffer off the linked list.
            let outputBuffer = headOfFreeOutputBuffers
            headOfFreeOutputBuffers = AudioQueueBufferRef(outputBuffer.memory.mUserData)
            // Copy the captured samples across and enqueue them for playback.
            outputBuffer.memory.mAudioDataByteSize = inputBuffer.memory.mAudioDataByteSize
            memcpy(outputBuffer.memory.mAudioData, inputBuffer.memory.mAudioData, Int(inputBuffer.memory.mAudioDataByteSize))
            assert(AudioQueueEnqueueBuffer(outputQueue, outputBuffer, 0, nil) == 0)
        } else {
            println(__FUNCTION__ + ": out-of-output-buffers!")
        }

        // Hand the input buffer straight back to the input queue for reuse.
        assert(AudioQueueEnqueueBuffer(inputQueue, inputBuffer, 0, nil) == 0)
    }
}

private func audioQueueOutputCallback(outputBuffer: AudioQueueBufferRef) {
    if active {
        // This buffer has been played; push it back onto the free list.
        outputBuffer.memory.mUserData = UnsafeMutablePointer<Void>(headOfFreeOutputBuffers)
        headOfFreeOutputBuffers = outputBuffer
    }
}

func start() {
    var error: NSError?
    audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: .allZeros, error: &error)
    dumpError(error, functionName: "AVAudioSessionCategoryPlayAndRecord")
    audioSession.setPreferredSampleRate(16000, error: &error)
    dumpError(error, functionName: "setPreferredSampleRate")
    audioSession.setPreferredIOBufferDuration(0.005, error: &error)
    dumpError(error, functionName: "setPreferredIOBufferDuration")

    audioSession.setActive(true, error: &error)
    dumpError(error, functionName: "setActive(true)")

    assert(active == false)
    active = true

    // Swift 1.x cannot pass Swift functions as the C callbacks required by
    // AudioQueueNewInput/AudioQueueNewOutput, so small C functions do the bridging.
    assert(MyAudioQueueConfigureInputQueueAndCallback(audioStreamBasicDescription, &inputQueue, audioQueueInputCallback) == 0)
    assert(MyAudioQueueConfigureOutputQueueAndCallback(audioStreamBasicDescription, &outputQueue, audioQueueOutputCallback) == 0)

    for (var i = 0; i < numberOfBuffers; i++) {
        var audioQueueBufferRef: AudioQueueBufferRef = nil
        assert(AudioQueueAllocateBuffer(inputQueue, bufferSize, &audioQueueBufferRef) == 0)
        assert(AudioQueueEnqueueBuffer(inputQueue, audioQueueBufferRef, 0, nil) == 0)
        inputBuffers.append(audioQueueBufferRef)

        // Output buffers are not enqueued yet; they start on the free list
        // and are only enqueued once the input callback has filled them.
        assert(AudioQueueAllocateBuffer(outputQueue, bufferSize, &audioQueueBufferRef) == 0)
        outputBuffers.append(audioQueueBufferRef)

        audioQueueBufferRef.memory.mUserData = UnsafeMutablePointer<Void>(headOfFreeOutputBuffers)
        headOfFreeOutputBuffers = audioQueueBufferRef
    }

    assert(AudioQueueStart(inputQueue, nil) == 0)
    assert(AudioQueueStart(outputQueue, nil) == 0)
}
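
A side note on the C glue mentioned in the comments above: Swift 1.x cannot pass a Swift function where a C function pointer is expected, which is what forces the detour through C. From Swift 2 onwards, a closure that captures no state converts to a C function pointer automatically, so AudioQueueNewInput can be called straight from Swift. A minimal sketch in later-Swift syntax, reusing the names from the code above (untested, and parameter labels may differ by Swift version):

import AudioToolbox

var asbd = audioStreamBasicDescription
var newInputQueue: AudioQueueRef? = nil

// A capture-free closure converts to the C callback type that
// AudioQueueNewInput expects, so no separate C file is needed.
let status = AudioQueueNewInput(&asbd, { _, _, buffer, _, _, _ in
    audioQueueInputCallback(buffer)   // forward to the Swift handler above
}, nil, nil, nil, 0, &newInputQueue)
assert(status == noErr)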

2 Answers:

Answer 0 (score: 2):

After a long time, I found this wonderful post, which uses AudioUnits instead of AudioQueues. I ported it to Swift and then simply added:

audioSession.setPreferredIOBufferDuration(0.005, error: &error)
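
For anyone porting the same way: such posts typically build a single RemoteIO AudioUnit whose output render callback pulls the microphone samples directly into the buffers it has to fill, so nothing is copied between two queues. Below is a rough sketch of that general pattern in later-Swift syntax; it is my paraphrase, not the linked post's code, and stream-format setup plus OSStatus checking are omitted:

import AVFoundation
import AudioToolbox

func startPassThrough() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setPreferredSampleRate(16000)
    try session.setPreferredIOBufferDuration(0.005)   // the key line above
    try session.setActive(true)

    // Find and instantiate the RemoteIO unit.
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &desc) else { return }
    var maybeUnit: AudioUnit?
    AudioComponentInstanceNew(component, &maybeUnit)
    guard let unit = maybeUnit else { return }

    // Enable recording on the input element (bus 1); playback on the
    // output element (bus 0) is enabled by default.
    var enable: UInt32 = 1
    AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable,
                         UInt32(MemoryLayout<UInt32>.size))

    // The render callback for bus 0 pulls fresh input from bus 1 directly
    // into the buffers it was asked to fill: no memcpy, no free-buffer list.
    var callback = AURenderCallbackStruct(
        inputProc: { refCon, flags, timestamp, _, frameCount, ioData in
            let unit = unsafeBitCast(refCon, to: AudioUnit.self)
            return AudioUnitRender(unit, flags, timestamp, 1, frameCount, ioData!)
        },
        inputProcRefCon: UnsafeMutableRawPointer(unit))
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))

    AudioUnitInitialize(unit)
    AudioOutputUnitStart(unit)
}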

Answer 1 (score: 1):

If you are recording audio from a microphone and playing it back within earshot of that microphone, then, since the audio throughput is not instantaneous, some of your earlier output will make it into the new input, hence the echo. This phenomenon is called feedback.

This is a structural problem, so changing the recording API will not help (although changing the record/playback buffer sizes lets you control the delay of the echo). You can either play the audio so that the microphone cannot hear it (e.g. not at all, or through headphones), or go down the rabbit hole of echo cancellation.
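
To put rough numbers on the buffer-size point, using the configuration from the question (the 80-buffer case is an illustrative upper bound, not a measurement):

// Each 256-byte buffer holds 64 mono Float32 frames: 4 ms at 16 kHz.
let sampleRate = 16_000.0
let bytesPerFrame = 4.0
let framesPerBuffer = 256.0 / bytesPerFrame              // 64 frames
let msPerBuffer = framesPerBuffer / sampleRate * 1_000   // 4 ms

// If all 80 output buffers ever pile up in the output queue, playback,
// and with it the echo, can lag the microphone by up to:
let worstCaseEchoMs = msPerBuffer * 80                   // 320 ms
print(msPerBuffer, worstCaseEchoMs)                      // 4.0 320.0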