I'm using an AudioUnit to play back and record at the same time. The preferred settings are a sample rate of 48 kHz and a buffer duration of 0.02 s.
These are the render callbacks for recording and playback:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    IosAudioController *microphone = (__bridge IosAudioController *)inRefCon;
    // render audio into buffer
    OSStatus result = AudioUnitRender(microphone.audioUnit,
                                      ioActionFlags,
                                      inTimeStamp,
                                      inBusNumber,
                                      inNumberFrames,
                                      microphone.tempBuffer);
    checkStatus(result); // sometimes fails with kAudioUnitErr_InvalidPropertyValue
    // notify delegate of new buffer list to process
    if ([microphone.dataSource respondsToSelector:@selector(microphone:hasBufferList:withBufferSize:withNumberOfChannels:)])
    {
        [microphone.dataSource microphone:microphone
                            hasBufferList:microphone.tempBuffer
                           withBufferSize:inNumberFrames
                     withNumberOfChannels:microphone.destinationFormat.mChannelsPerFrame];
    }
    return result;
}
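The checkStatus helper isn't shown in the post. A minimal hypothetical version might simply log non-zero result codes; here OSStatus and noErr are mirrored locally only so the sketch compiles off-platform:

```c
#include <stdio.h>
#include <stdint.h>

typedef int32_t OSStatus;   /* mirrors CoreAudio's 32-bit signed OSStatus */
enum { noErr = 0 };         /* mirrors the system noErr constant */

/* Hypothetical checkStatus: log any non-zero CoreAudio result code.
 * Returns 1 on success, 0 on failure, so callers can branch on it. */
static int checkStatus(OSStatus status) {
    if (status != noErr) {
        fprintf(stderr, "AudioUnit call failed: %d\n", (int)status);
        return 0;
    }
    return 1;
}
```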
/**
This callback is called when the audioUnit needs new data to play through the
speakers. If you don't have any, just don't write anything in the buffers
*/
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    IosAudioController *output = (__bridge IosAudioController *)inRefCon;
    //
    // Try to ask the data source for audio data to fill out the output's
    // buffer list
    //
    if ([output.dataSource respondsToSelector:@selector(outputShouldUseCircularBuffer:)]) {
        TPCircularBuffer *circularBuffer = [output.dataSource outputShouldUseCircularBuffer:output];
        if (!circularBuffer) {
            // SInt32 *left  = ioData->mBuffers[0].mData;
            // SInt32 *right = ioData->mBuffers[1].mData;
            // for (int i = 0; i < inNumberFrames; i++) {
            //     left[i]  = 0.0f;
            //     right[i] = 0.0f;
            // }
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
            return noErr;
        }
        /**
         Thank you Michael Tyson (A Tasty Pixel) for writing the TPCircularBuffer, you are amazing!
         */
        // Get the available bytes in the circular buffer
        int32_t availableBytes;
        void *buffer = TPCircularBufferTail(circularBuffer, &availableBytes);
        int32_t amount = 0;
        // float floatNumber = availableBytes * 0.25 / 48;
        // float speakerNumber = ioData->mBuffers[0].mDataByteSize * 0.25 / 48;
        for (int i = 0; i < ioData->mNumberBuffers; i++) {
            AudioBuffer abuffer = ioData->mBuffers[i];
            // Ideally we'd copy a full buffer's worth, but never more than is available (take the min)
            amount = MIN(abuffer.mDataByteSize, availableBytes);
            // Copy into the audio buffer, which gets played after this function returns
            memcpy(abuffer.mData, buffer, amount);
            // Set the data size on the real buffer, not the local struct copy,
            // so the shrunken size actually reaches the audio unit
            ioData->mBuffers[i].mDataByteSize = amount;
        }
        // Consume those bytes (this internally advances the tail of the circular buffer)
        TPCircularBufferConsume(circularBuffer, amount);
    }
    else
    {
        //
        // Silence if there is nothing to output
        //
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    }
    return noErr;
}
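The core of that loop is a copy-the-minimum pattern. A platform-independent sketch of one variant, which zero-fills the remainder instead of shrinking mDataByteSize (a swap, not the post's exact approach; names are illustrative and a plain byte pointer stands in for TPCircularBufferTail's result):

```c
#include <string.h>
#include <stdint.h>

/* Copy up to `wanted` bytes from the ring buffer's tail into the output
 * buffer; zero-fill whatever we could not cover so the hardware never
 * plays stale samples. Returns how many bytes the caller should consume. */
static int32_t fill_output(uint8_t *dst, int32_t wanted,
                           const uint8_t *tail, int32_t available) {
    int32_t amount = wanted < available ? wanted : available;
    memcpy(dst, tail, (size_t)amount);
    memset(dst + amount, 0, (size_t)(wanted - amount));
    return amount;
}
```

The caller would then pass the return value to TPCircularBufferConsume, exactly as the post does with `amount`.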
_tempBuffer is configured with 4096 frames.
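configureMicrophoneBufferList isn't shown in the post; a sketch of how such a 4096-frame buffer list might be allocated. The Demo* structs mirror CoreAudio's AudioBuffer/AudioBufferList layouts purely so the example is self-contained, and the 16-bit mono-per-buffer format is an assumption:

```c
#include <stdlib.h>
#include <stdint.h>

/* Local mirrors of CoreAudio's AudioBuffer / AudioBufferList layouts,
 * defined here only so the sketch compiles off-platform. */
typedef struct { uint32_t mNumberChannels; uint32_t mDataByteSize; void *mData; } DemoAudioBuffer;
typedef struct { uint32_t mNumberBuffers; DemoAudioBuffer mBuffers[1]; } DemoAudioBufferList;

/* Allocate a buffer list with `nBuffers` buffers, each holding `frames`
 * frames of `bytesPerFrame`-sized samples (4096 frames in the post). */
static DemoAudioBufferList *alloc_buffer_list(uint32_t nBuffers, uint32_t frames,
                                              uint32_t bytesPerFrame) {
    size_t size = sizeof(DemoAudioBufferList)
                + (size_t)(nBuffers - 1) * sizeof(DemoAudioBuffer);
    DemoAudioBufferList *list = malloc(size);
    list->mNumberBuffers = nBuffers;
    for (uint32_t i = 0; i < nBuffers; i++) {
        list->mBuffers[i].mNumberChannels = 1;
        list->mBuffers[i].mDataByteSize = frames * bytesPerFrame;
        list->mBuffers[i].mData = calloc(frames, bytesPerFrame);
    }
    return list;
}
```

The matching teardown is the same loop the post uses in its deallocation code: free each mData, then free the list itself.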
Here is how I deallocate the audioUnit. Note that because of a bug where the VoiceProcessingIO unit may not work if you start it, stop it, and then start it again, I need to dispose of and initialize it every time. This is a known issue and was posted somewhere, but I've lost the link.
if (_tempBuffer != NULL) {
    for (unsigned i = 0; i < _tempBuffer->mNumberBuffers; i++) {
        free(_tempBuffer->mBuffers[i].mData);
    }
    free(_tempBuffer);
}
AudioComponentInstanceDispose(_audioUnit);
This configuration works on the 6, 6+, and earlier devices, but problems show up on the 6s (and probably the 6s+). Sometimes (and these bugs are really annoying; I hate them. For me it happens in 6-7 out of 20 test runs) data still flows in and out of the IOUnit, but there is no sound at all.
It never seems to happen on the first run, so I'm guessing it might be a memory issue with the IOUnit, but I still have no idea how to fix it.
Any suggestions would be greatly appreciated.
UPDATE
I forgot to show how I configure the AudioUnit:
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

// Get audio unit
status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
checkStatus(status);

// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(_audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// Enable IO for playback
status = AudioUnitSetProperty(_audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status);

// Apply format
status = AudioUnitSetProperty(_audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &_destinationFormat,
                              sizeof(self.destinationFormat));
checkStatus(status);
status = AudioUnitSetProperty(_audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &_destinationFormat,
                              sizeof(self.destinationFormat));
checkStatus(status);

// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status);

// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(_audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status);

// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(_audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));
[self configureMicrophoneBufferList];

// Initialise
status = AudioUnitInitialize(_audioUnit);
Answer (score: 1)
Three things could be the problem:
For silence (during an underflow), you might want to try filling the buffers with inNumberFrames of zeros rather than leaving them unmodified.
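In C terms, that zero-fill is a memset over every output buffer. A minimal sketch, assuming 16-bit interleaved samples (adjust the sample type to match your actual stream format):

```c
#include <string.h>
#include <stdint.h>

/* Write `frames` frames of silence into one interleaved channel buffer,
 * instead of leaving the previous render's contents in place. */
static void write_silence(int16_t *samples, uint32_t frames, uint32_t channelsPerFrame) {
    memset(samples, 0, (size_t)frames * channelsPerFrame * sizeof(int16_t));
}
```

In the callback you would run this over each ioData->mBuffers[i].mData and still set kAudioUnitRenderAction_OutputIsSilence, as the post already does.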
Apple DTS recommends against using any Objective-C messaging (your respondsToSelector: call) inside an audio callback.
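One way to follow that advice is to resolve the delegate check once at setup time, where respondsToSelector: is safe, and leave only a plain C function-pointer call in the render callback. A sketch with illustrative names:

```c
#include <stddef.h>
#include <stdint.h>

/* Resolved once on the main thread when starting the unit; the render
 * callback then only performs a NULL check and a plain C call. */
typedef void (*buffer_handler_t)(void *ctx, const float *samples, uint32_t frames);

typedef struct {
    buffer_handler_t handler;  /* NULL if the data source didn't respond */
    void *ctx;
} callback_state_t;

/* What the audio callback would do instead of respondsToSelector:.
 * Returns 1 if a handler was invoked, 0 otherwise. */
static int dispatch_buffer(const callback_state_t *state,
                           const float *samples, uint32_t frames) {
    if (state->handler == NULL) return 0;
    state->handler(state->ctx, samples, frames);
    return 1;
}
```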
You should not free the buffers or call AudioComponentInstanceDispose until audio processing has truly stopped. Because the audio units run on another, real-time thread, they don't really stop (give up the thread or CPU time) until some time after your app makes the stop-audio call. I would wait a few seconds, and certainly not call (re)initialize or (re)start before that delay has passed.
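That "wait a few seconds" advice can be encoded as a simple guard: record when the stop call was made, and gate any free/dispose/restart on elapsed wall-clock time. The delay value is an assumption on my part; the answer only says "a few seconds":

```c
#include <stdbool.h>
#include <time.h>

/* Returns true only once `delaySeconds` have elapsed since the stop
 * request, i.e. when it is plausibly safe to free buffers or call
 * AudioComponentInstanceDispose. */
static bool safe_to_dispose(time_t stopTime, time_t now, double delaySeconds) {
    return difftime(now, stopTime) >= delaySeconds;
}
```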