I'm writing an iOS app that captures audio from the microphone, runs it through a high-pass filter, and plays it back through the speaker. When I run it on an iPhone 4S I get a -50 OSStatus error from the AudioUnitRender call in my render callback, but it runs fine on the simulator. I'm using an AUGraph with a RemoteIO unit, a HighPassFilter effect unit, and an AUConverter unit that makes the HPF's output ASBD match the RemoteIO unit's input. The converter AudioUnit instance is named converterUnit.
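To make the routing explicit, the signal chain that the initialization code below sets up is: mic -> RemoteIO input element (bus 1) -> HighPassFilter -> AUConverter -> render callback -> RemoteIO output element (bus 0) -> speaker.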
Here's the code:
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    AudioController *THIS = (AudioController *)inRefCon;

    // Ask the converter unit for its output stream format...
    AudioStreamBasicDescription converterOutputASBD;
    UInt32 converterOutputASBDSize = sizeof(converterOutputASBD);
    AudioUnitGetProperty([THIS converterUnit], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &converterOutputASBD, &converterOutputASBDSize);

    // ...and size a single buffer accordingly.
    AudioBuffer buffer;
    buffer.mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
    buffer.mNumberChannels = converterOutputASBD.mChannelsPerFrame;
    buffer.mData = malloc(buffer.mDataByteSize);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Pull the processed samples from the converter; this is the call that fails with -50 on the device.
    OSStatus result = AudioUnitRender([THIS converterUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    ...
}
My understanding is that a -50 error (kAudio_ParamError) means one of the parameters is wrong. The only parameters that could plausibly be wrong are [THIS converterUnit] and &bufferList, since everything else is handed to me as a callback argument. I've checked the converterUnit instance and it is correctly allocated and initialized (and more to the point, if that were the problem it wouldn't run on the simulator either). That leaves bufferList to check. What I've concluded from debugging so far is that the input ASBD of the RemoteIO's output element, and inNumberFrames, are both different on the phone and on the simulator. But as far as I can tell that shouldn't matter in my case, because I create the AudioBuffer and allocate its memory based on the ASBD returned by the AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call.
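One thing I notice while writing this: the AudioUnitGetProperty call inside my callback throws away its return status, so I can't actually tell whether the format query itself succeeds on the device. A minimal check (not in the code above) would be something like:

OSStatus fmtStatus = AudioUnitGetProperty([THIS converterUnit], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &converterOutputASBD, &converterOutputASBDSize);
if (fmtStatus != noErr) {
    // bail out rather than build the buffer from a garbage ASBD
    printf("StreamFormat query failed: %d\n", (int)fmtStatus);
    return fmtStatus;
}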
Any help would be hugely appreciated, I'm getting desperate here... You guys rock. Cheers.
UPDATE:
Here's the definition of the AudioController class:
@interface AudioController : NSObject
{
    AUGraph mGraph;
    AudioUnit mEffects;
    AudioUnit ioUnit;
    AudioUnit converterUnit;
}

@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;
@property (readonly, nonatomic) AudioUnit converterUnit;
@property (nonatomic) float* volumenPromedio;

- (void)initializeAUGraph;
- (void)startAUGraph;
- (void)stopAUGraph;

@end
And here's the AUGraph initialization code (defined in AudioController.mm):
- (void)initializeAUGraph
{
    NSError *audioSessionError = nil;
    AVAudioSession *mySession = [AVAudioSession sharedInstance];
    [mySession setPreferredHardwareSampleRate: kGraphSampleRate
                                        error: &audioSessionError];
    [mySession setCategory: AVAudioSessionCategoryPlayAndRecord
                     error: &audioSessionError];
    [mySession setActive: YES error: &audioSessionError];

    OSStatus result = noErr;

    // create a new AUGraph
    result = NewAUGraph(&mGraph);

    AUNode outputNode;
    AUNode effectsNode;
    AUNode converterNode;

    // effects component
    AudioComponentDescription effects_desc;
    effects_desc.componentType = kAudioUnitType_Effect;
    effects_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
    effects_desc.componentFlags = 0;
    effects_desc.componentFlagsMask = 0;
    effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // output component
    AudioComponentDescription output_desc;
    output_desc.componentType = kAudioUnitType_Output;
    output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
    output_desc.componentFlags = 0;
    output_desc.componentFlagsMask = 0;
    output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // stream format converter component
    AudioComponentDescription converter_desc;
    converter_desc.componentType = kAudioUnitType_FormatConverter;
    converter_desc.componentSubType = kAudioUnitSubType_AUConverter;
    converter_desc.componentFlags = 0;
    converter_desc.componentFlagsMask = 0;
    converter_desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Add nodes to the graph
    result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
    [self hasError:result:__FILE__:__LINE__];
    result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);
    [self hasError:result:__FILE__:__LINE__];
    result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);

    // Manage connections in the graph:
    // connect the io unit node's input element's output to the effectsNode input
    result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
    // connect the effects node's output to the converter node's input
    result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, converterNode, 0);

    // open the graph
    result = AUGraphOpen(mGraph);

    // Get references to the audio units
    result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
    result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
    result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterUnit);

    // Enable input on the remote io unit
    UInt32 flag = 1;
    result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));

    // Set up the render callback struct
    AURenderCallbackStruct renderCallbackStruct;
    renderCallbackStruct.inputProc = &renderInput;
    renderCallbackStruct.inputProcRefCon = self;
    result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);

    // Get the fx unit's current input stream format...
    AudioStreamBasicDescription fxInputASBD;
    UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
    result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);

    // ...and set it on the io unit's input element's output scope
    result = AudioUnitSetProperty(ioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  1,
                                  &fxInputASBD,
                                  sizeof(fxInputASBD));

    // Set the fx unit's output sample rate, just in case
    Float64 sampleRate = 44100.0;
    result = AudioUnitSetProperty(mEffects,
                                  kAudioUnitProperty_SampleRate,
                                  kAudioUnitScope_Output,
                                  0,
                                  &sampleRate,
                                  sizeof(sampleRate));

    // Get the fx audio unit's output ASBD...
    AudioStreamBasicDescription fxOutputASBD;
    result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &sizeOfASBD);

    // ...and set it on the converter audio unit's input
    result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxOutputASBD, sizeof(fxOutputASBD));

    // Now get the io audio unit's output element's input ASBD...
    AudioStreamBasicDescription ioUnitsOutputElementInputASBD;
    result = AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioUnitsOutputElementInputASBD, &sizeOfASBD);

    // ...set the sample rate...
    ioUnitsOutputElementInputASBD.mSampleRate = 44100.0;

    // ...and set it on the converter audio unit's output
    result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioUnitsOutputElementInputASBD, sizeof(ioUnitsOutputElementInputASBD));

    // initialize the graph
    result = AUGraphInitialize(mGraph);
}
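(The startAUGraph and stopAUGraph bodies aren't shown because there's nothing interesting in them; they're essentially guarded wrappers around AUGraphStart / AUGraphStop, roughly along these lines:

- (void)startAUGraph
{
    Boolean isRunning = false;
    OSStatus result = AUGraphIsRunning(mGraph, &isRunning);
    if (result == noErr && !isRunning) {
        result = AUGraphStart(mGraph);
        [self hasError:result:__FILE__:__LINE__];
    }
}

- (void)stopAUGraph
{
    Boolean isRunning = false;
    OSStatus result = AUGraphIsRunning(mGraph, &isRunning);
    if (result == noErr && isRunning) {
        result = AUGraphStop(mGraph);
        [self hasError:result:__FILE__:__LINE__];
    }
})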
The reason I connect the converter's output to the remote io unit's output element's input through a render callback (rather than with AUGraphConnectNodeInput) is that I need to do some calculations on the samples once they've been processed by the high-pass filter. The render callback gives me the chance to look at the sample buffer right after the AudioUnitRender call and do those calculations there, as sketched below.
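To illustrate the kind of calculation I mean, here's a simplified sketch of what follows the AudioUnitRender call in renderInput (it assumes 16-bit signed interleaved samples, which is what the simulator reports, and the variable names are made up):

if (result == noErr) {
    // e.g. compute an average level over the rendered buffer
    SInt16 *samples = (SInt16 *)bufferList.mBuffers[0].mData;
    UInt32 count = bufferList.mBuffers[0].mDataByteSize / sizeof(SInt16);
    float sum = 0.0f;
    for (UInt32 i = 0; i < count; i++) {
        sum += fabsf(samples[i] / 32768.0f);
    }
    float averageLevel = (count > 0) ? (sum / count) : 0.0f;
}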
UPDATE 2:
Through debugging I've found that the remote IO unit's output bus's input ASBD is different on the device and on the simulator. It shouldn't make a difference (I allocate and initialize the AudioBufferList based on data from a previous AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call), but it's the only thing I can see that differs between the device and the simulator.
Here's the remote IO unit's output bus's input ASBD on the device:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 41
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 32
UInt32 mReserved 0
And this is the one on the simulator:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 12
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 16
UInt32 mReserved 0
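For reference, decoding the two mFormatFlags values: 12 is kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked, while 41 is kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved. I'm not sure whether this is the problem, but if the device format really is non-interleaved, then as far as I understand it a 2-channel render would want one AudioBuffer per channel rather than the single interleaved buffer I'm building, something like:

// hypothetical ABL for a non-interleaved stereo format -- one buffer per channel
AudioBufferList *abl = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer)); // room for 2 mBuffers
abl->mNumberBuffers = converterOutputASBD.mChannelsPerFrame; // 2
for (UInt32 i = 0; i < abl->mNumberBuffers; i++) {
    abl->mBuffers[i].mNumberChannels = 1;
    abl->mBuffers[i].mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
    abl->mBuffers[i].mData = malloc(abl->mBuffers[i].mDataByteSize);
}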