AudioUnit - Controlling left and right channel output in Swift

Date: 2017-11-24 07:57:22

Tags: swift channel audiounit

I am trying to record and play back simultaneously in Swift, and I need to play through the left and right channels separately. Using AudioUnit, I recorded and played back successfully in a single channel, but after I tried to use two buffers to control the two channels, both of them are silent. Here is how I set up the format:

    var audioFormat = AudioStreamBasicDescription()
    audioFormat.mSampleRate = Double(sampleRate)
    audioFormat.mFormatID = kAudioFormatLinearPCM
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked
    audioFormat.mChannelsPerFrame = 2
    audioFormat.mFramesPerPacket = 1
    audioFormat.mBitsPerChannel = 32
    audioFormat.mBytesPerPacket = 8
    audioFormat.mReserved = 0
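Note that the format above never sets `mBytesPerFrame` (so it stays 0, which makes the ASBD invalid), and it declares `kLinearPCMFormatFlagIsSignedInteger` even though the callbacks below read the samples as `Float`. As a sketch (assuming the rest of the setup code supplies `sampleRate`), a self-consistent interleaved Float32 stereo description would look like this:

```swift
import AudioToolbox

// Sketch of a self-consistent Float32 ASBD. Interleaved stereo: one
// buffer with frames laid out as L R L R …, 8 bytes per frame
// (2 channels × 4 bytes). The function name is illustrative.
func makeStereoFloatFormat(sampleRate: Float64) -> AudioStreamBasicDescription {
    var fmt = AudioStreamBasicDescription()
    fmt.mSampleRate       = sampleRate
    fmt.mFormatID         = kAudioFormatLinearPCM
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked
    fmt.mChannelsPerFrame = 2
    fmt.mBitsPerChannel   = 32
    fmt.mBytesPerFrame    = 8   // must be set; the original leaves it at 0
    fmt.mFramesPerPacket  = 1
    fmt.mBytesPerPacket   = 8
    return fmt
}
```

If you want two separate per-channel buffers (which is what the output callback below assumes when it indexes `abl![0]` and `abl![1]`), you would instead add `kAudioFormatFlagIsNonInterleaved` to the flags and set `mBytesPerFrame` and `mBytesPerPacket` to 4.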

Here is my input callback:

    private let inputCallback: AURenderCallback = {(
    inRefCon,
    ioActionFlags,
    inTimeStamp,
    inBusNumber,
    inNumberFrames,
    ioData) -> OSStatus in
    let audioRAP:AudioUnitSample = Unmanaged<AudioUnitSample>.fromOpaque(inRefCon).takeUnretainedValue()
    var status = noErr;
    var buf = UnsafeMutableRawPointer.allocate(bytes: Int(inNumberFrames * 4),
                                               alignedTo: MemoryLayout<Int8>.alignment)
    let bindptr = buf.bindMemory(to: Float.self,
                                 capacity: Int(inNumberFrames * 4))
    bindptr.initialize(to: 0)
    var buffer: AudioBuffer = AudioBuffer(mNumberChannels: 2,
                             mDataByteSize: inNumberFrames * 4,
                             mData: buf)

    memset(buffer.mData, 0, Int(buffer.mDataByteSize))
    var bufferList: AudioBufferList = AudioBufferList(mNumberBuffers: 1,
                                     mBuffers: buffer)

    status = AudioUnitRender(audioRAP.newAudioUnit!,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList)
    audioRAP.audioBuffers.append((bufferList.mBuffers, Int(inNumberFrames * 4)))

    return status
}

Here is my output callback:

    private let outputCallback:AURenderCallback = {
    (inRefCon,
    ioActionFlags,
    inTimeStamp,
    inBusNumber,
    inNumberFrames,
    ioData) -> OSStatus in
    let audioRAP:AudioUnitSample = Unmanaged<AudioUnitSample>.fromOpaque(inRefCon).takeUnretainedValue()
    if ioData == nil{
        return noErr
    }
    ioData!.pointee.mNumberBuffers = 2
    var bufferCount = ioData!.pointee.mNumberBuffers

        var tempBuffer = audioRAP.audioBuffers[0]

        var monoSamples = [Float]()
        let ptr1 = tempBuffer.0.mData?.assumingMemoryBound(to: Float.self)
        monoSamples.removeAll()
        monoSamples.append(contentsOf: UnsafeBufferPointer(start: ptr1, count: Int(inNumberFrames)))

        let abl = UnsafeMutableAudioBufferListPointer(ioData)
        let bufferLeft = abl![0]
        let bufferRight = abl![1]
        let pointerLeft: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(bufferLeft)
        let pointerRight: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(bufferRight)

        for frame in 0..<inNumberFrames {
            let pointerIndex = pointerLeft.startIndex.advanced(by: Int(frame))
            pointerLeft[pointerIndex] = monoSamples[Int(frame)]
        }
        for frame in 0..<inNumberFrames {
            let pointerIndex = pointerRight.startIndex.advanced(by: Int(frame))
            pointerRight[pointerIndex] = monoSamples[Int(frame)]
        }

        tempBuffer.0.mData?.deallocate(bytes:tempBuffer.1, alignedTo: MemoryLayout<Int8>.alignment)
        audioRAP.audioBuffers.removeFirst()
    return noErr
}

Here is the declaration of audioBuffers:

    private var audioBuffers = [(AudioBuffer, Int)]()

Did I miss something in the output or input part? Any help would be greatly appreciated!

1 Answer:

Answer 0 (score: 0)

The first big problem is that your code does memory allocation inside the audio callbacks. Apple's documentation explicitly states that memory management, synchronization, and even object message sends should not be done in the audio context. Inside an audio callback you should stick to copying audio sample data into and out of pre-allocated buffers. Everything else (especially buffer creation and deallocation) should be done outside the audio callback.
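As a sketch of that pre-allocation idea (the type and method names here are illustrative, not part of the question's code): allocate one fixed-size store at setup time, and have the callbacks do nothing but copy samples. A per-channel gain then gives the left/right control the question asks for:

```swift
// Illustrative sketch only: a fixed-size sample store allocated once at
// setup, so the render callbacks never allocate or free memory. This
// plain array is NOT thread-safe; a real implementation would use a
// lock-free ring buffer (e.g. TPCircularBuffer) between the input and
// output callbacks.
final class PreallocatedAudioStore {
    private var samples: [Float]
    private var writeIndex = 0
    private var readIndex = 0

    init(capacityFrames: Int) {
        samples = [Float](repeating: 0, count: capacityFrames)
    }

    // Called from the input callback: copy rendered frames in.
    func write(_ src: UnsafePointer<Float>, frames: Int) {
        for i in 0..<frames {
            samples[(writeIndex + i) % samples.count] = src[i]
        }
        writeIndex = (writeIndex + frames) % samples.count
    }

    // Called from the output callback: copy the same mono signal into
    // both channel buffers, applying an independent gain per channel.
    func read(intoLeft left: UnsafeMutablePointer<Float>,
              right: UnsafeMutablePointer<Float>,
              frames: Int, leftGain: Float, rightGain: Float) {
        for i in 0..<frames {
            let s = samples[(readIndex + i) % samples.count]
            left[i]  = s * leftGain   // 1.0 to play, 0.0 to mute
            right[i] = s * rightGain
        }
        readIndex = (readIndex + frames) % samples.count
    }
}
```

With `leftGain: 1, rightGain: 0` the signal plays only through the left channel, and vice versa, without any allocation or `deallocate` call ever happening inside the callbacks.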