I'm trying to record the sound produced at the output of a mixer unit.
Currently, my code is based on Apple's MixerHost iOS app demo: a mixer node is connected to a remote I/O node on the audio graph.
I'm trying to set an input callback on the remote I/O node's input, which is fed by the mixer output.
I'm doing something wrong, but I can't find the error.
Here is the code. It runs right after the multichannel mixer unit setup:
UInt32 flag = 1;
// Enable IO for playback
result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
0, // Output bus
&flag,
sizeof(flag));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty EnableIO" withStatus: result]; return;}
/* can't do that because *** AudioUnitSetProperty EnableIO error: -1073752493 00000000
result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
0, // Output bus
&flag,
sizeof(flag));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty EnableIO" withStatus: result]; return;}
*/
Then I create a stream format:
// I/O stream format
iOStreamFormat.mSampleRate = 44100.0;
iOStreamFormat.mFormatID = kAudioFormatLinearPCM;
iOStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iOStreamFormat.mFramesPerPacket = 1;
iOStreamFormat.mChannelsPerFrame = 1;
iOStreamFormat.mBitsPerChannel = 16;
iOStreamFormat.mBytesPerPacket = 2;
iOStreamFormat.mBytesPerFrame = 2;
[self printASBD: iOStreamFormat];
Then I apply the format and set the sample rate:
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output,
1, // Input bus
&iOStreamFormat,
sizeof(iOStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty StreamFormat" withStatus: result]; return;}
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,
0, // Output bus
&iOStreamFormat,
sizeof(iOStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty StreamFormat" withStatus: result]; return;}
// SampleRate I/O
result = AudioUnitSetProperty (iOUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Input,
0, // Output
&graphSampleRate,
sizeof (graphSampleRate));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty (set I/O unit input stream format)" withStatus: result]; return;}
Then I try to set up the render callback.
Solution 1 >>> my recording callback is never called:
effectState.rioUnit = iOUnit;
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &recordingCallback;
renderCallbackStruct.inputProcRefCon = &effectState;
result = AudioUnitSetProperty (iOUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input,
0, // Output bus
&renderCallbackStruct,
sizeof (renderCallbackStruct));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty SetRenderCallback" withStatus: result]; return;}
Solution 2 >>> my app crashes at launch:
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &recordingCallback;
renderCallbackStruct.inputProcRefCon = &effectState;
result = AUGraphSetNodeInputCallback (processingGraph, iONode,
0, // Output bus
&renderCallbackStruct);
if (noErr != result) {[self printErrorMessage: @"AUGraphSetNodeInputCallback (I/O unit input callback bus 0)" withStatus: result]; return;}
If anyone has an idea...
EDIT Solution 3 (thanks to arlo's answer) >> there is now a format problem:
AudioStreamBasicDescription dstFormat = {0};
dstFormat.mSampleRate=44100.0;
dstFormat.mFormatID=kAudioFormatLinearPCM;
dstFormat.mFormatFlags=kAudioFormatFlagsNativeEndian|kAudioFormatFlagIsSignedInteger|kAudioFormatFlagIsPacked;
dstFormat.mBytesPerPacket=4;
dstFormat.mBytesPerFrame=4;
dstFormat.mFramesPerPacket=1;
dstFormat.mChannelsPerFrame=2;
dstFormat.mBitsPerChannel=16;
dstFormat.mReserved=0;
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output,
1,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input,
0,
&stereoStreamFormat,
sizeof(stereoStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}
AudioUnitAddRenderNotify(
iOUnit,
&recordingCallback,
&effectState
);
And the file setup:
if (noErr != result) {[self printErrorMessage: @"AUGraphInitialize" withStatus: result]; return;}
// Initialize the audio file
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory] autorelease];
NSLog(@">>> %@", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &effectState.audioFileRef);
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");
setupErr = ExtAudioFileSetProperty(effectState.audioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
NSAssert(setupErr == noErr, @"Couldn't create file for format");
setupErr = ExtAudioFileWriteAsync(effectState.audioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
The recording callback:
static OSStatus recordingCallback (void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData) {
if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && inBusNumber == 0) // flags are a bitmask: test with &, not ==
{
EffectState *effectState = (EffectState *)inRefCon;
ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, ioData);
}
return noErr;
}
Something is missing from the output file output.caf :). I'm completely lost about which formats to apply.
Answer 0 (score: 15)
I don't think you need to enable input on the I/O unit. I would also comment out the format and sample rate configuration you're doing on the I/O unit until you get your callback running, because a mismatched or unsupported format can prevent the audio units from being linked together.
To add the callback, try this:
AudioUnitAddRenderNotify(
iOUnit,
&recordingCallback,
self
);
Apparently the other approaches will replace the node connection, but this one will not, so your audio units can stay connected even though you have added a callback.
Once your callback is running, if you find that there is no data in the buffers (ioData), wrap your callback code in this:
if (*ioActionFlags & kAudioUnitRenderAction_PostRender) { // flags are a bitmask: test with &
// your code
}
This is needed because a callback added this way runs both before and after the audio unit renders its audio, but you only want to run your code after it renders.
Once the callback is running, the next step is to figure out what audio format it is receiving and handle that format appropriately. Try adding this to your callback:
SInt16 *dataLeftChannel = (SInt16 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
NSLog(@"sample %lu: %d", frameNumber, dataLeftChannel[frameNumber]);
}
This will slow your app down so much that it will probably prevent any audio from actually playing, but you should be able to run it just long enough to see what the samples look like. If the callback is receiving 16-bit audio, the samples should be positive or negative integers between -32768 and 32767. If the samples alternate between a normal-looking number and a much smaller number, try this code in your callback instead:
SInt32 *dataLeftChannel = (SInt32 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
NSLog(@"sample %lu: %ld", frameNumber, dataLeftChannel[frameNumber]);
}
That should show you the full 8.24 samples.
If you can save the data in the format the callback receives, then you should have what you need. If you need to save it in a different format, you should be able to convert the format in the Remote I/O audio unit... but I haven't been able to figure out how to do that when it is connected to a multichannel mixer unit. Alternatively, you can convert the data using Audio Converter Services. First, define the input and output formats:
AudioStreamBasicDescription monoCanonicalFormat;
size_t bytesPerSample = sizeof (AudioUnitSampleType);
monoCanonicalFormat.mFormatID = kAudioFormatLinearPCM;
monoCanonicalFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
monoCanonicalFormat.mBytesPerPacket = bytesPerSample;
monoCanonicalFormat.mFramesPerPacket = 1;
monoCanonicalFormat.mBytesPerFrame = bytesPerSample;
monoCanonicalFormat.mChannelsPerFrame = 1;
monoCanonicalFormat.mBitsPerChannel = 8 * bytesPerSample;
monoCanonicalFormat.mSampleRate = graphSampleRate;
AudioStreamBasicDescription mono16Format;
bytesPerSample = sizeof (SInt16);
mono16Format.mFormatID = kAudioFormatLinearPCM;
mono16Format.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mono16Format.mChannelsPerFrame = 1;
mono16Format.mSampleRate = graphSampleRate;
mono16Format.mBitsPerChannel = 16;
mono16Format.mFramesPerPacket = 1;
mono16Format.mBytesPerPacket = 2;
mono16Format.mBytesPerFrame = 2;
Then define a converter somewhere outside your callback, and create a temporary buffer for handling the data during conversion:
AudioConverterRef formatConverterCanonicalTo16;
@property AudioConverterRef formatConverterCanonicalTo16;
@synthesize formatConverterCanonicalTo16;
AudioConverterNew(
&monoCanonicalFormat,
&mono16Format,
&formatConverterCanonicalTo16
);
SInt16 *data16;
@property (readwrite) SInt16 *data16;
@synthesize data16;
data16 = malloc(sizeof(SInt16) * 4096);
Then add this to your callback, before you save your data:
UInt32 dataSizeCanonical = ioData->mBuffers[0].mDataByteSize;
SInt32 *dataCanonical = (SInt32 *)ioData->mBuffers[0].mData;
UInt32 dataSize16 = dataSizeCanonical;
AudioConverterConvertBuffer(
effectState->formatConverterCanonicalTo16,
dataSizeCanonical,
dataCanonical,
&dataSize16,
effectState->data16
);
Then you can save data16, which is in 16-bit format and might be what you want saved in your file. It will be more compatible and half as large as the canonical data.
When you're done, you can clean up a couple of things:
AudioConverterDispose(formatConverterCanonicalTo16);
free(data16);