I am trying to write an iOS app that captures sound from the microphone, passes it through a high-pass filter, and then runs some calculations on the processed sound. Based on Stefan Popp's MicInput (http://www.stefanpopp.de/2011/capture-iphone-microphone/), I am trying to place an effect audio unit (more specifically, a high-pass filter effect unit) between the input and output of the I/O audio unit. After setting up said AU, the AudioUnitRender(fxAudioUnit, ...) call inside the I/O AU's render callback fails with error -10877 (kAudioUnitErr_InvalidElement).
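To clarify the intended signal flow, here is the pull chain I am trying to build, sketched as comments (this reflects my understanding of the Core Audio pull model):
// recordingCallback fires when the mic has new samples
//   -> AudioUnitRender(fxAudioUnit, ...)    // ask the filter for processed samples
//     -> fxAudioUnitRenderCallback          // the filter pulls its input from here
//       -> AudioUnitRender(audioUnit, ...)  // fetch the raw mic samples from the I/O AU
//   -> [audioProcessor processBuffer:...]   // finally, run my calculations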
AudioProcessingWithAudioUnitAPI.h
//
// AudioProcessingWithAudioUnitAPI.h
//
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVAudioSession.h>
@interface AudioProcessingWithAudioUnitAPI : NSObject
@property (readonly) AudioBuffer audioBuffer;
@property (readonly) AudioComponentInstance audioUnit;
@property (readonly) AudioComponentInstance fxAudioUnit;
...
@end
AudioProcessingWithAudioUnitAPI.m
//
// AudioProcessingWithAudioUnitAPI.m
//
#import "AudioProcessingWithAudioUnitAPI.h"
@implementation AudioProcessingWithAudioUnitAPI
@synthesize isPlaying = _isPlaying;
@synthesize outputLevelDisplay = _outputLevelDisplay;
@synthesize audioBuffer = _audioBuffer;
@synthesize audioUnit = _audioUnit;
@synthesize fxAudioUnit = _fxAudioUnit;
...
#pragma mark Recording callback
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// the data gets rendered here
AudioBuffer buffer;
// a variable to hold the status we check
OSStatus status;
/**
This is a reference to the object that owns the callback.
*/
AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;
/**
At this point we define the number of channels, which is mono
for the iPhone. The number of frames is usually 512 or 1024.
*/
buffer.mDataByteSize = inNumberFrames * 2; // sample size
buffer.mNumberChannels = 1; // one channel
buffer.mData = malloc( inNumberFrames * 2 ); // buffer size
// we put our buffer into a bufferlist array for rendering
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// The next AudioUnitRender call is where the -10877 (kAudioUnitErr_InvalidElement) error is thrown:
status = AudioUnitRender([audioProcessor fxAudioUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
[audioProcessor hasError:status:__FILE__:__LINE__];
...
// process the bufferlist in the audio processor
[audioProcessor processBuffer:&bufferList];
//do some further processing
// clean up the buffer
free(bufferList.mBuffers[0].mData);
return noErr;
}
#pragma mark FX AudioUnit render callback
// This just asks the I/O AU for the microphone samples (I/O AU render)
static OSStatus fxAudioUnitRenderCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
OSStatus retorno;
AudioProcessingWithAudioUnitAPI* audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*)inRefCon;
retorno = AudioUnitRender([audioProcessor audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
ioData);
[audioProcessor hasError:retorno:__FILE__:__LINE__];
return retorno;
}
#pragma mark Playback callback
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
/**
This is a reference to the object that owns the callback.
*/
AudioProcessingWithAudioUnitAPI *audioProcessor = (__bridge AudioProcessingWithAudioUnitAPI*) inRefCon;
// iterate over incoming stream and copy to output stream
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer buffer = ioData->mBuffers[i];
// find minimum size
UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
// copy buffer to audio buffer which gets played after function return
memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);
// set data size
buffer.mDataByteSize = size;
}
return noErr;
}
#pragma mark - objective-c class methods
-(AudioProcessingWithAudioUnitAPI*)init
{
self = [super init];
if (self) {
self.isPlaying = NO;
[self initializeAudio];
}
return self;
}
-(void)initializeAudio
{
OSStatus status;
// We define the audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output; // we want output
desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want input and output
desc.componentFlags = 0; // must be zero
desc.componentFlagsMask = 0; // must be zero
desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider
// find the AU component by description
AudioComponent component = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(component, &_audioUnit);
[self hasError:status:__FILE__:__LINE__];
// and now for the fx AudioUnit
desc.componentType = kAudioUnitType_Effect;
desc.componentSubType = kAudioUnitSubType_HighPassFilter;
// find the AU component by description
component = AudioComponentFindNext(NULL, &desc);
// create audio unit by component
status = AudioComponentInstanceNew(component, &_fxAudioUnit);
[self hasError:status:__FILE__:__LINE__];
// specify that we want to record from the input bus
AudioUnitElement inputElement = 1;
AudioUnitElement outputElement = 0;
UInt32 flag = 1;
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Input, // scope to input
inputElement, // select input bus (1)
&flag, // set flag
sizeof(flag));
[self hasError:status:__FILE__:__LINE__];
UInt32 anotherFlag = 0;
// disable output (I don't want to hear back from the device)
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_EnableIO, // use io
kAudioUnitScope_Output, // scope to output
outputElement, // select output bus (0)
&anotherFlag, // set flag
sizeof(anotherFlag));
[self hasError:status:__FILE__:__LINE__];
/*
We need to specify the format we want to work with.
We use linear PCM because it is uncompressed and we work on raw data.
We want 16-bit samples, 2 bytes per packet/frame, at 44.1 kHz, mono.
*/
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = SAMPLE_RATE;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16; // 2^16 = 65536 possible sample values
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
// set the format on the output stream
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
inputElement,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
// set the format on the input stream
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
outputElement,
&audioFormat,
sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];
/**
We need to define a callback structure that holds
a pointer to recordingCallback and a reference to
the audio processor object.
*/
AURenderCallbackStruct callbackStruct;
// set recording callback
callbackStruct.inputProc = recordingCallback; // recordingCallback pointer
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set input callback to recording callback on the input bus
status = AudioUnitSetProperty(self.audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
inputElement,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
/*
We do the same on the output stream to hear what is coming
from the input stream
*/
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set playbackCallback as callback on our renderer for the output bus
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
outputElement,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
callbackStruct.inputProc = fxAudioUnitRenderCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// set fxAudioUnitRenderCallback as the render callback on the effect AU, so the filter pulls its input from it
status = AudioUnitSetProperty(self.fxAudioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
0,
&callbackStruct,
sizeof(callbackStruct));
[self hasError:status:__FILE__:__LINE__];
// reset flag to 0
flag = 0;
/*
we tell the audio units not to allocate their own render buffers,
since we supply our own and write into them directly.
*/
status = AudioUnitSetProperty(self.audioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
inputElement,
&flag,
sizeof(flag));
status = AudioUnitSetProperty(self.fxAudioUnit,
kAudioUnitProperty_ShouldAllocateBuffer,
kAudioUnitScope_Output,
0,
&flag,
sizeof(flag));
/*
we set the number of channels to mono and allocate a block size of
1024 bytes (512 samples * 2 bytes).
*/
_audioBuffer.mNumberChannels = 1;
_audioBuffer.mDataByteSize = 512 * 2;
_audioBuffer.mData = malloc( 512 * 2 );
// Initialize the Audio Unit and cross fingers =)
status = AudioUnitInitialize(self.fxAudioUnit);
[self hasError:status:__FILE__:__LINE__];
status = AudioUnitInitialize(self.audioUnit);
[self hasError:status:__FILE__:__LINE__];
NSLog(@"Started");
}
// For now, this just copies the buffer to self.audioBuffer
-(void)processBuffer: (AudioBufferList*) audioBufferList
{
AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];
// we check here if the input data byte size has changed
if (_audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
// clear old buffer
free(self.audioBuffer.mData);
// assign the new byte size and allocate mData accordingly
_audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
_audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
}
// copy incoming audio data to the audio buffer
memcpy(self.audioBuffer.mData, audioBufferList->mBuffers[0].mData, audioBufferList->mBuffers[0].mDataByteSize);
}
#pragma mark - Error handling
-(void)hasError:(int)statusCode:(char*)file:(int)line
{
if (statusCode) {
printf("Error Code responded %d in file %s on line %d\n", statusCode, file, line);
exit(-1);
}
}
@end
Any help would be greatly appreciated.
Answer 0 (score: 3)
This type of question comes up often, so I wrote a mini-tutorial on this subject a while back. That guide will solve the problem, but I now feel the more elegant approach is to use the Novocaine framework, which takes much of the pain out of AudioUnit setup on iOS.
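For illustration, a minimal sketch of what this might look like with Novocaine's block-based API (the audioManager singleton and inputBlock property are quoted from memory of the project's README, so verify the names against the version you use):
#import "Novocaine.h"

Novocaine *audioManager = [Novocaine audioManager];
audioManager.inputBlock = ^(float *data, UInt32 numFrames, UInt32 numChannels) {
    // run the high-pass filter over `data` here, then do your calculations
};
[audioManager play];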
Answer 1 (score: 0)
I found some demo code that might be useful for you:
DEMO URL: https://github.com/JNYJdev/AudioUnit
OR
Blog: http://atastypixel.com/blog/using-remoteio-audio-unit/
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Because of the way our audio format (setup below) is chosen:
// we only need 1 buffer, since it is mono
// Samples are 16 bits = 2 bytes.
// 1 frame includes only 1 sample
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Then:
// Obtain recorded samples
OSStatus status;
status = AudioUnitRender([iosAudio audioUnit],
ioActionFlags,
inTimeStamp,
inBusNumber,
inNumberFrames,
&bufferList);
checkStatus(status);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
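Note that this snippet calls checkStatus(), which the excerpt doesn't define; in the linked blog post it is essentially a guard along these lines (a sketch, not the verbatim original):
static void checkStatus(OSStatus status) {
    if (status != noErr) {
        // bail out with the raw error code so it can be looked up
        printf("Status not OK: %d\n", (int)status);
        exit(-1);
    }
}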