Apple CoreAudio - why does AUGraphStop() take 25ms to complete?

Asked: 2011-03-24 19:32:03

Tags: iphone macos core-audio

Our audio application uses AUGraphs containing a mixer unit, a converter unit, and an output unit. This is a real-time application, so performance is a major concern.

An issue has been flagged where AUGraphStop() takes 25 ms to complete on the main thread, and our profiler shows that it spends that time sleeping. Can anyone explain why this happens? Is it waiting for the next zero crossing, or waiting for one more buffer to be rendered?

I've tried several workarounds, including rendering one silent frame (with the kAudioUnitRenderAction_OutputIsSilence flag set) before attempting to stop, and calling AUGraphStop() from the kAudioUnitRenderAction_PostRender notification callback (which gave mixed results and, after reading up on it, doesn't appear to be a recommended approach). A sketch of that second attempt is below.
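
For reference, a minimal sketch of the PostRender attempt; the AudioPlayer type and mStopRequested flag are illustrative assumptions, not our actual code, and stopping the graph from the render thread like this is exactly the pattern that turned out not to be recommended:

//Sketch only: AudioPlayer and mStopRequested are hypothetical names
static OSStatus AudioNotifyCallback(void* inRefCon,
                                    AudioUnitRenderActionFlags* ioActionFlags,
                                    const AudioTimeStamp* inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList* ioData)
{
        AudioPlayer* player = (AudioPlayer*)inRefCon;

        if((*ioActionFlags & kAudioUnitRenderAction_PostRender) && player->mStopRequested)
        {
                //Flag the buffer we just rendered as silence...
                *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;

                //...then stop the graph from inside the notify callback
                AUGraphStop(player->mAudioGraph);
        }

        return noErr;
}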

I've already tried lowering the frames-per-slice, but it doesn't seem to make any difference. I also tried removing every audio unit except the output node, to narrow the problem down to a specific unit, but AUGraphStop() still costs 25 ms. (The frames-per-slice change was made roughly as sketched below.)
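
A minimal sketch of one way to lower the frames-per-slice, assuming mOutputUnit has already been retrieved as in the setup code below; the 256-frame value is just an example, and the choice of kAudioUnitProperty_MaximumFramesPerSlice (which caps how many frames a unit is asked to render per call) is an assumption, since the exact property we tuned isn't shown here:

//Cap the number of frames rendered per slice (example value)
UInt32 FramesPerSlice = 256;
CA_ERR(AudioUnitSetProperty(mOutputUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &FramesPerSlice, sizeof(FramesPerSlice)));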

Here's how we initialize the graph:

//Configure converter/mixer/output unit descriptors
AudioComponentDescription OutputUnitDesc = { kAudioUnitType_Output ,kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple };
AudioComponentDescription MixerUnitDesc = { kAudioUnitType_Mixer, kAudioUnitSubType_MatrixMixer, kAudioUnitManufacturer_Apple };
AudioComponentDescription ConverterUnitDesc = { kAudioUnitType_FormatConverter, kAudioUnitSubType_AUConverter, kAudioUnitManufacturer_Apple };

//Create graph
CA_ERR(NewAUGraph(&mAudioGraph));

//Add converter/mixer/output nodes
CA_ERR(AUGraphAddNode(mAudioGraph, &OutputUnitDesc, &mOutputNode));
CA_ERR(AUGraphAddNode(mAudioGraph, &MixerUnitDesc, &mMixerNode));
CA_ERR(AUGraphAddNode(mAudioGraph, &ConverterUnitDesc, &mConverterNode));

//Connect nodes
CA_ERR(AUGraphConnectNodeInput(mAudioGraph, mConverterNode, 0, mMixerNode, 0));
CA_ERR(AUGraphConnectNodeInput(mAudioGraph, mMixerNode, 0, mOutputNode, 0));

//Open the graph (instantiates the units)
CA_ERR(AUGraphOpen(mAudioGraph));

//Get the created units
CA_ERR(AUGraphNodeInfo(mAudioGraph, mOutputNode, NULL, &mOutputUnit));
CA_ERR(AUGraphNodeInfo(mAudioGraph, mMixerNode, NULL, &mMixerUnit));
CA_ERR(AUGraphNodeInfo(mAudioGraph, mConverterNode, NULL, &mConverterUnit));

//Setup stream format description
mStreamDesc.mFormatID = kAudioFormatLinearPCM;
mStreamDesc.mFormatFlags = kLinearPCMFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
mStreamDesc.mChannelsPerFrame = Header->nChannels;
mStreamDesc.mSampleRate = (Float64)Header->nSamplesPerSec;
mStreamDesc.mBitsPerChannel = Header->wBitsPerSample;
mStreamDesc.mBytesPerFrame = (Header->wBitsPerSample >> 3) * Header->nChannels;
mStreamDesc.mFramesPerPacket = 1;
mStreamDesc.mBytesPerPacket = mStreamDesc.mBytesPerFrame * mStreamDesc.mFramesPerPacket;
mStreamDesc.mReserved = 0;

//Set data endianness according to file type - TODO: Get endianness from header
AudioSystem::FileType FileType = mSample->GetFiletype();

if(FileType == AudioSystem::WAV)
        mStreamDesc.mFormatFlags |= kAudioFormatFlagsNativeEndian;
else if(FileType == AudioSystem::OGG)
        mStreamDesc.mFormatFlags |= kLinearPCMFormatFlagIsBigEndian;

//Configure number of input/output busses for mixer unit
UInt32 NumChannelsIn = Header->nChannels;
UInt32 NumChannelsOut = (UInt32)AudioSystem::GetOutputChannelConfig();
CA_ERR(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_BusCount, kAudioUnitScope_Input, 0, &NumChannelsIn, sizeof(UInt32)));
CA_ERR(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_BusCount, kAudioUnitScope_Output, 0, &NumChannelsOut, sizeof(UInt32)));

//Set render callback
AURenderCallbackStruct callback = { AudioRenderCallback, this };
CA_ERR(AUGraphSetNodeInputCallback(mAudioGraph, mConverterNode, 0, &callback));

//Set stream format to something native to CoreAudio
AudioStreamBasicDescription OutputDesc = {0};
UInt32 Size = sizeof(AudioStreamBasicDescription);
CA_ERR(AudioUnitGetProperty(mMixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &OutputDesc, &Size));

//Set num output channels
OutputDesc.mChannelsPerFrame = (int)AudioSystem::GetOutputChannelConfig();

//Set stream format
CA_ERR(AudioUnitSetProperty(mConverterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &mStreamDesc, sizeof(AudioStreamBasicDescription)));
CA_ERR(AudioUnitSetProperty(mConverterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &OutputDesc, sizeof(AudioStreamBasicDescription)));
CA_ERR(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &OutputDesc, sizeof(AudioStreamBasicDescription)));
CA_ERR(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &OutputDesc, sizeof(AudioStreamBasicDescription)));
CA_ERR(AudioUnitSetProperty(mOutputUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &OutputDesc, sizeof(AudioStreamBasicDescription)));

//Initialise graph
CA_ERR(AUGraphInitialize(mAudioGraph));

//Set notification callback
CA_ERR(AUGraphAddRenderNotify(mAudioGraph, AudioNotifyCallback, this));

//Set global mixer volume
CA_ERR(AudioUnitSetParameter(mMixerUnit, kMatrixMixerParam_Volume, kAudioUnitScope_Global, 0xFFFFFFFF, 1.0, 0));

//Set input channel volumes
for(int i = 0; i < Header->nChannels; i++)
{
        CA_ERR(AudioUnitSetParameter(mMixerUnit, kMatrixMixerParam_Volume, kAudioUnitScope_Input, i, 1.0, 0));
}

//Set output channel volumes
for(int i = 0; i < (int)AudioSystem::GetOutputChannelConfig(); i++)
{
        CA_ERR(AudioUnitSetParameter(mMixerUnit, kMatrixMixerParam_Volume, kAudioUnitScope_Output, i, 1.0, 0));
}

The graph is started and stopped with AUGraphStart() and AUGraphStop(), nothing special:
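
//Start rendering
CA_ERR(AUGraphStart(mAudioGraph));

//...later, on the main thread - this call blocks for ~25ms
CA_ERR(AUGraphStop(mAudioGraph));

Here's the call stack for the stop, captured with the Shark profiler: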

| | | | | | | | | + 1.3%, AudioUnitGraph::Stop(), AudioToolbox
| | | | | | | | | | + 1.3%, AudioOutputUnitStop, AudioUnit
| | | | | | | | | | | + 1.3%, CallComponentDispatch, CarbonCore
| | | | | | | | | | | | + 1.3%, DefaultOutputAUEntry, CoreAudio
| | | | | | | | | | | | | + 1.3%, AUHALEntry, CoreAudio
| | | | | | | | | | | | | | + 1.3%, usleep$UNIX2003, libSystem.B.dylib
| | | | | | | | | | | | | | | + 1.3%, nanosleep$UNIX2003, libSystem.B.dylib
| | | | | | | | | | | | | | | |   1.3%, __semwait_signal, libSystem.B.dylib

The entire AUGraphStop() call is spent sleeping!

Any clues?

1 answer:

Answer 0 (score: 0)

If you've already reached the minimum configurable frames-per-slice, those 25 ms inside AUHAL are most likely the manifestation of a hardware constraint (the time needed to access and/or (re)configure some audio DAC control register or state, and/or some audio amplifier control state), handled crudely as a fixed delay in the iOS audio driver thread that locks access to that hardware. But this is just a wild guess.

For a hard real-time application, I would consider moving AUGraphStop() (and AUGraphStart()) off the main UI thread, and managing your own thread semaphores for the AU work; see the sketch below.
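
A minimal sketch of that suggestion, using a GCD serial queue instead of a hand-rolled thread/semaphore pair; the queue label and function names are illustrative, and mAudioGraph is the graph from the question's setup code:

#include <dispatch/dispatch.h>

//Serial queue so start/stop requests cannot overlap (illustrative name)
static dispatch_queue_t sAudioControlQueue;

void InitAudioControlQueue(void)
{
        sAudioControlQueue = dispatch_queue_create("com.example.audio-control", NULL);
}

void StopGraphAsync(AUGraph graph)
{
        //Hand the blocking AUGraphStop() to the control queue so its
        //~25ms sleep never stalls the main UI thread
        dispatch_async(sAudioControlQueue, ^{
                AUGraphStop(graph);
        });
}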