I've been looking into a lot of interesting things today about iOS and Audio Units, and have found plenty of useful resources (including SO).
First of all, I'm confused about something: is it really necessary to create an audio graph with a mixer in order to record the sound played by an app?
Or is it enough to play the sounds with ObjectAL (or, more simply, with AVAudioPlayer calls) and to create a single Remote IO unit with a recording callback on the right bus?
Second, a more programming-oriented question! Since I'm not familiar with Audio Unit concepts, I tried to make Apple's MixerHost project able to record the resulting mix. Obviously, I tried to follow Michael Tyson's RemoteIO post.
I get an EXC_BAD_ACCESS in my callback function:
static OSStatus recordingCallback (void                        *inRefCon,
                                   AudioUnitRenderActionFlags  *ioActionFlags,
                                   const AudioTimeStamp        *inTimeStamp,
                                   UInt32                       inBusNumber,
                                   UInt32                       inNumberFrames,
                                   AudioBufferList             *ioData) {

    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)

    EffectState *effectState = (EffectState *)inRefCon;
    AudioUnit rioUnit = effectState->rioUnit;

    OSStatus status;

    // BELOW I GET THE ERROR
    status = AudioUnitRender(rioUnit,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    if (noErr != status) { NSLog(@"AudioUnitRender error"); return noErr; }

    // Now, we have the samples we just read sitting in buffers in bufferList
    //ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, bufferList);

    return noErr;
}
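(As an aside, the "fill this up" comment matters: bufferList is handed to AudioUnitRender uninitialized here. A minimal sketch of what that allocation could look like for the mono 16-bit format configured below; in real code the buffer would ideally be allocated once outside the render callback rather than malloc'd on every call:

// Illustrative only: a one-buffer AudioBufferList sized for
// inNumberFrames of mono 16-bit samples.
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
bufferList->mBuffers[0].mData           = malloc(inNumberFrames * sizeof(SInt16));

// ... AudioUnitRender(...) fills bufferList->mBuffers[0].mData ...

free(bufferList->mBuffers[0].mData);
free(bufferList);

)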
Before using the callback function, I declared this in MixerHostAudio.h:

typedef struct {
    AudioUnit rioUnit;
    ExtAudioFileRef audioFileRef;
} EffectState;
And created these in the interface:
AudioUnit iOUnit;
EffectState effectState;
AudioStreamBasicDescription iOStreamFormat;
...
@property AudioUnit iOUnit;
@property (readwrite) AudioStreamBasicDescription iOStreamFormat;
Then in the implementation file MixerHostAudio.m:

#define kOutputBus 0
#define kInputBus 1
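(As a reminder, the usual mental model for Remote IO bus numbering that these constants refer to; this diagram is not from the MixerHost code itself:

// Bus 1 (kInputBus)  : microphone -> [input scope] Remote IO [output scope] -> your code
// Bus 0 (kOutputBus) : your code  -> [input scope] Remote IO [output scope] -> speaker

)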
...
@synthesize iOUnit; // the Remote IO unit
...
result = AUGraphNodeInfo(processingGraph,
                         iONode,
                         NULL,
                         &iOUnit);

if (noErr != result) { [self printErrorMessage:@"AUGraphNodeInfo" withStatus:result]; return; }
// Enable IO for recording (input scope of the input bus)
UInt32 flag = 1;
result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));

if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }
// Describe the format: mono 16-bit linear PCM at 44.1 kHz
// (1 channel x 16 bits = 2 bytes per frame; 1 frame per packet = 2 bytes per packet)
iOStreamFormat.mSampleRate       = 44100.00;
iOStreamFormat.mFormatID         = kAudioFormatLinearPCM;
iOStreamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iOStreamFormat.mFramesPerPacket  = 1;
iOStreamFormat.mChannelsPerFrame = 1;
iOStreamFormat.mBitsPerChannel   = 16;
iOStreamFormat.mBytesPerPacket   = 2;
iOStreamFormat.mBytesPerFrame    = 2;
// Apply the format to the output scope of the input bus (samples coming from the mic side)...
result = AudioUnitSetProperty(iOUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));

if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }

// ...and to the input scope of the output bus (samples going to the speaker side)
result = AudioUnitSetProperty(iOUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &iOStreamFormat,
                              sizeof(iOStreamFormat));

if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }
effectState.rioUnit = iOUnit;

// Set input callback ----> RECORDING
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = recordingCallback;
callbackStruct.inputProcRefCon = self;

result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

if (noErr != result) { [self printErrorMessage:@"AudioUnitSetProperty" withStatus:result]; return; }
But I don't know what's wrong here or how to dig into it. Note: the EffectState struct exists because I'm also trying to integrate the BioAudio project's ability to write from the buffers to a file.
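(For completeness, the commented-out ExtAudioFileWriteAsync call in the callback presumes effectState.audioFileRef was opened beforehand. A minimal sketch of that setup; the file path and the CAF container are assumptions here, not taken from MixerHost or BioAudio:

// Illustrative setup for effectState.audioFileRef (path and file type are assumptions).
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"mix.caf"];
CFURLRef fileURL = (CFURLRef)[NSURL fileURLWithPath:path];

OSStatus status = ExtAudioFileCreateWithURL(fileURL,
                                            kAudioFileCAFType,
                                            &iOStreamFormat,   // file format: same PCM as the stream
                                            NULL,
                                            kAudioFileFlags_EraseFile,
                                            &effectState.audioFileRef);
if (noErr != status) { [self printErrorMessage:@"ExtAudioFileCreateWithURL" withStatus:status]; return; }

// Declare the format of the buffers the callback will hand it.
status = ExtAudioFileSetProperty(effectState.audioFileRef,
                                 kExtAudioFileProperty_ClientDataFormat,
                                 sizeof(iOStreamFormat),
                                 &iOStreamFormat);

// Prime the async writer with a 0-frame write from this non-realtime thread,
// so later ExtAudioFileWriteAsync calls from the render callback are safe.
status = ExtAudioFileWriteAsync(effectState.audioFileRef, 0, NULL);

)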
And third, I'd like to know: is there an easier way to record only the sounds produced by my iPhone app (i.e., excluding the microphone)?
Answer 0 (score: 0):
I found it myself. I had forgotten to hook this up:

callbackStruct.inputProcRefCon = &effectState;
That's it for the code part. Now I'm back to my conceptual questions...
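(That also explains the EXC_BAD_ACCESS: with inputProcRefCon = self, the callback's cast (EffectState *)inRefCon reinterpreted an Objective-C object as the struct, so effectState->rioUnit was garbage and AudioUnitRender received an invalid unit. The corrected pairing, for clarity:

// Setup: pass a pointer to the actual struct the callback expects.
effectState.rioUnit = iOUnit;
callbackStruct.inputProcRefCon = &effectState;

// Callback: the cast now matches what was passed in.
EffectState *effectState = (EffectState *)inRefCon;
AudioUnit rioUnit = effectState->rioUnit; // a valid AudioUnit this time

)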