AVCapture output to an audio unit callback via TPCircularBuffer

Asked: 2014-06-01 10:09:33

Tags: objective-c c macos core-audio circular-buffer

I am building an AUGraph and trying to get audio from the input device through the AVCaptureAudioDataOutput delegate method.

I ended up using AVCaptureSession because of the issue explained here. I did manage to get audio playback working with this approach through a CARingBuffer, as described in the book Learning Core Audio. However, reading from a CARingBuffer requires supplying a valid sample time, and once I stop the AVCaptureSession, the sample times of the AVCaptureOutput and of the unit's input callback are no longer in sync. So I am now trying Michael Tyson's TPCircularBuffer, which looks great from everything I have read. But even though I found a few examples, I cannot get any audio out of it (or only crackles).

My graph looks like this:

AVCaptureSession -> callback -> AUConverter -> ... -> HALOutput

Here is the code of my AVCaptureOutput method:

- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{

CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *sampleBufferASBD = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

if (kAudioFormatLinearPCM != sampleBufferASBD->mFormatID) {

    NSLog(@"Bad format or bogus ASBD!");
    return;

}

if ((sampleBufferASBD->mChannelsPerFrame != _audioStreamDescription.mChannelsPerFrame) || (sampleBufferASBD->mSampleRate != _audioStreamDescription.mSampleRate)) {

    _audioStreamDescription = *sampleBufferASBD;
    NSLog(@"sample input format changed");

}

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                        NULL,
                                                        _currentInputAudioBufferList,
                                                        CAAudioBufferList::CalculateByteSize(_audioStreamDescription.mChannelsPerFrame),
                                                        kCFAllocatorSystemDefault,
                                                        kCFAllocatorSystemDefault,
                                                        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                        &_blockBufferOut);


TPCircularBufferProduceBytes(&_circularBuffer, _currentInputAudioBufferList->mBuffers[0].mData, _currentInputAudioBufferList->mBuffers[0].mDataByteSize);

}

The render callback:

OSStatus PushCurrentInputBufferIntoAudioUnit(void *                        inRefCon,
                                             AudioUnitRenderActionFlags *   ioActionFlags,
                                             const AudioTimeStamp *         inTimeStamp,
                                             UInt32                         inBusNumber,
                                             UInt32                         inNumberFrames,
                                             AudioBufferList *              ioData)
{

ozAVHardwareInput *hardWareInput = (ozAVHardwareInput *)inRefCon;
TPCircularBuffer circularBuffer = [hardWareInput circularBuffer];

Float32 *targetBuffer = (Float32 *)ioData->mBuffers[0].mData;

int32_t availableBytes;
TPCircularBufferTail(&circularBuffer, &availableBytes);
UInt32 dataSize = ioData->mBuffers[0].mDataByteSize;

if (availableBytes > ozAudioDataSizeForSeconds(3.)) {

    // There is too much audio data to play -> clear buffer & mute output
    TPCircularBufferClear(&circularBuffer);

    for(UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

} else if (availableBytes > ozAudioDataSizeForSeconds(0.5)) {

    // SHOULD PLAY
    Float32 *cbuffer = (Float32 *)TPCircularBufferTail(&circularBuffer, &availableBytes);
    int32_t min = MIN(dataSize, availableBytes);

    memcpy(targetBuffer, cbuffer, min);
    TPCircularBufferConsume(&circularBuffer, min);
    ioData->mBuffers[0].mDataByteSize = min;

} else {

    // No data to play -> mute output
    for(UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}

return noErr;

}

The TPCircularBuffer is fed from the AudioBufferList, but there is no output at all, or sometimes only crackles.

What am I doing wrong?

1 Answer:

Answer 0 (score: 0)

An audio unit render callback should always return inNumberFrames of samples. Check how much data your callback is actually returning.