Mixing MIDI audio channels with Core Audio in Objective-C

Date: 2012-04-12 21:57:59

Tags: objective-c core-audio mixer coremidi

I am trying to write a program that will receive MIDI signals from different instruments. At the moment the MIDI signals are sent to sampler units (kAudioUnitSubType_Sampler), each with an associated instrument sound supplied by a SoundFont. Individually I can get each instrument to play correctly, but I need to be able to mix multiple instruments together.

At first I thought about creating a separate AUGraph for each track, but I imagine that would use a lot of memory and would not be the best solution.

Since then I have been trying to get an audio mixer unit (kAudioUnitSubType_AU3DMixerEmbedded) working. After setting up the other audio units (which I have already tested), I set up the mixer with the following code:

cd.componentType = kAudioUnitType_Mixer;
cd.componentSubType = kAudioUnitSubType_AU3DMixerEmbedded;
cd.componentManufacturer = kAudioUnitManufacturer_Apple; // Apple supplies the mixer
cd.componentFlags = 0;
cd.componentFlagsMask = 0;

result = AUGraphAddNode (_processingGraph, &cd, &mixerNode);
NSCAssert (result == noErr, @"Unable to add the mixer unit to the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

Then I opened the graph:

result = AUGraphOpen (_processingGraph);
NSCAssert (result == noErr, @"Unable to open the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

Next I set up the stream description:

AudioStreamBasicDescription desc;

desc.mSampleRate = 44100; // set sample rate
desc.mFormatID = kAudioFormatLinearPCM;
desc.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
desc.mBitsPerChannel = sizeof(AudioSampleType) * 8; // AudioSampleType == 16-bit signed ints
desc.mChannelsPerFrame = 1;
desc.mFramesPerPacket = 1;
desc.mBytesPerFrame = ( desc.mBitsPerChannel / 8 ) * desc.mChannelsPerFrame;
desc.mBytesPerPacket = desc.mBytesPerFrame * desc.mFramesPerPacket;

Finally, I connected all the nodes together:

result = AUGraphConnectNodeInput (_processingGraph, mixerNode, 0, ioNode, 0);
NSCAssert (result == noErr, @"Unable to interconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int) result, (const char *)&result);

result = AUGraphConnectNodeInput(_processingGraph, samplerNode, 0, mixerNode, 0); 

NSCAssert (result == noErr, @"Unable to connect the sampler output (0) to mixer input (0). Error code: %d '%.4s'", (int) result, (const char *)&result);

result = AUGraphConnectNodeInput(_processingGraph, drumSamplerNode, 0, mixerNode, 1);

NSCAssert (result == noErr, @"Unable to connect the drum sampler output (0) to mixer input (1). Error code: %d '%.4s'", (int) result, (const char *)&result);

So the mixer is connected to the I/O unit, the first sampler feeds input bus 0 on the mixer, and the second sampler feeds input bus 1. Here is a copy of the CAShow output:

  Member Nodes:
node 1: 'aumu' 'samp' 'appl', instance 0x8882210 O  
node 2: 'aumu' 'samp' 'appl', instance 0x88819d0 O  
node 3: 'auou' 'rioc' 'appl', instance 0x8883510 O  
node 4: 'aumx' '3dem' 'appl', instance 0x8a5d5d0 O  
  Connections:
node   4 bus   0 => node   3 bus   0  [ 1 ch,  44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
node   1 bus   0 => node   4 bus   0  [ 1 ch,  44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
node   2 bus   0 => node   4 bus   1  [ 1 ch,  44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer]
    CurrentState:
mLastUpdateError=0, eventsToProcess=F, isRunning=F

When I initialize the graph I get OSStatus -10868. I have seen examples that use ring buffers and render callbacks. Are those needed when working with MIDI, or only when taking input from a live source such as a microphone? Mainly I would like to know whether my general approach is viable (i.e. it should work and I have just made a small mistake somewhere), or whether I need to spend a few hours reading up on ring buffers.

Any help is greatly appreciated!

1 Answer:

Answer 0 (score: 2)

If you just want to send MIDI messages to the AUSamplers, you don't need ring buffers or render callbacks. You just send a message:

OSStatus result = noErr;
UInt32 noteNum = 60;
UInt32 onVelocity = 100;
UInt32 noteCommand =    kMIDIMessage_NoteOn << 4 | 0;
result = MusicDeviceMIDIEvent (sampler.samplerUnit, noteCommand, noteNum, onVelocity, 0);

I have seen that error code before, but I forget what it means. There is an order you need to follow during setup. Check out audioGraph (docs) (source). I suggest you set up your RemoteIO unit and the mixer first; the CAShow() output also makes more sense that way.

Also, by default the mixer only has one input. If you need more, you need to specify that:

// set the bus count
UInt32 numBuses = busCount;
result = AudioUnitSetProperty(mixerUnit, 
                              kAudioUnitProperty_ElementCount, 
                              kAudioUnitScope_Input, 
                              0, 
                              &numBuses, 
                              sizeof(numBuses));

if (noErr != result) {
    [self printErrorMessage: @"Error setting Bus Count" withStatus: result];
    return;
}