Last weekend I hit a stumbling block while learning how to program audio synthesis on iOS. I have been developing on iOS for several years, but I am just getting into the audio synthesis side of things. Right now I am only writing demo apps to help me learn the concepts. I am currently able to build and stack sine waves in an Audio Unit's playback renderer without any problems. But I want to understand what is going on in the renderer so that I can render 2 separate sine waves, one in each of the left and right channels. Currently, I assume that in my audio init section I need to make the following change:
From:
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = kSampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
To:
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = kSampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 4;
audioFormat.mBytesPerFrame = 4;
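If I instead wanted the render callback to hand me two separate buffers (one per channel) rather than interleaved samples, my understanding - possibly wrong - is that the format would also need the kAudioFormatFlagIsNonInterleaved flag, with the per-packet/per-frame sizes then describing a single channel. An untested sketch of that variant:

AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate       = kSampleRate;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger
                              | kAudioFormatFlagIsPacked
                              | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 2;   // stereo
audioFormat.mBitsPerChannel   = 16;
audioFormat.mBytesPerPacket   = 2;   // per channel, since non-interleaved
audioFormat.mBytesPerFrame    = 2;   // per channel, since non-interleaved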
However, the renderer is still largely a mystery to me. I have been working off of every tutorial and piece of sample code I can find. I can get things working for the given context of a mono signal, but I cannot get the renderer to generate a stereo signal. All I want is one distinct frequency in the left channel and a different frequency in the right channel - but honestly, I don't understand the renderer well enough to get it working. I have attempted memcpy-ing into mBuffers[0] and mBuffers[1], but that crashes the app. My render callback is below (it currently contains stacked sine waves, but for the stereo example I could just use one wave of a set frequency in each channel).
#define kOutputBus 0
#define kSampleRate 44100
//44100.0f
#define kWaveform (M_PI * 2.0f / kSampleRate)
OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData) {
    HomeViewController *me = (HomeViewController *)inRefCon;
    static int phase = 1;
    static int phase1 = 1;

    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        int samples = ioData->mBuffers[i].mDataByteSize / sizeof(SInt16);
        SInt16 values[samples];
        float waves;
        float volume = .5;
        float wave1;

        for (int j = 0; j < samples; j++) {
            waves = 0;
            wave1 = 0;
            MyManager *sharedManager = [MyManager sharedManager];

            wave1 = sin(kWaveform * sharedManager.globalFr1 * phase1) * sharedManager.globalVol1;
            if (0.000001f > wave1) {
                [me setFr1:sharedManager.globalFr1];
                phase1 = 0;
                //NSLog(@"switch");
            }
            waves += wave1;
            waves += sin(kWaveform * sharedManager.globalFr2 * phase) * sharedManager.globalVol2;
            waves += sin(kWaveform * sharedManager.globalFr3 * phase) * sharedManager.globalVol3;
            waves += sin(kWaveform * sharedManager.globalFr4 * phase) * sharedManager.globalVol4;
            waves += sin(kWaveform * sharedManager.globalFr5 * phase) * sharedManager.globalVol5;
            waves += sin(kWaveform * sharedManager.globalFr6 * phase) * sharedManager.globalVol6;
            waves += sin(kWaveform * sharedManager.globalFr7 * phase) * sharedManager.globalVol7;
            waves += sin(kWaveform * sharedManager.globalFr8 * phase) * sharedManager.globalVol8;
            waves += sin(kWaveform * sharedManager.globalFr9 * phase) * sharedManager.globalVol9;
            waves *= 32767 / 9; // <--------- make sure to divide by how many waves you're stacking

            values[j] = (SInt16)waves;
            values[j] += values[j]<<16;
            phase++;
            phase1++;
        }
        memcpy(ioData->mBuffers[i].mData, values, samples * sizeof(SInt16));
    }
    return noErr;
}
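For what it's worth, here is my (untested) guess at what the interleaved stereo case would have to look like with the 2-channel format above: a single buffer whose SInt16 samples alternate left, right, left, right. freqL and freqR are just placeholder frequencies, not values from my existing code:

// Untested sketch: fill ONE interleaved SInt16 buffer with a different
// fixed frequency per channel. freqL/freqR are placeholder values.
static int stereoPhase = 0;
float freqL = 440.0f;
float freqR = 523.25f;

SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
UInt32 frames = ioData->mBuffers[0].mDataByteSize / (2 * sizeof(SInt16)); // 2 samples per frame

for (UInt32 frame = 0; frame < frames; frame++) {
    out[2 * frame]     = (SInt16)(sin(kWaveform * freqL * stereoPhase) * 32767 * 0.5); // left
    out[2 * frame + 1] = (SInt16)(sin(kWaveform * freqR * stereoPhase) * 32767 * 0.5); // right
    stereoPhase++;
}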
Thanks in advance for any help!
Answer 0 (score: 3):
I had the same problem of wanting to direct tones independently to the left and right channels. It's easiest to describe the fix in terms of Matt Gallagher's now-standard An iOS tone generator (an introduction to AudioUnits).
The first change to make is to set (following @jwkerr) streamFormat.mChannelsPerFrame = 2; (instead of streamFormat.mChannelsPerFrame = 1;) in the createToneUnit method. Once that is done and you have two channels/buffers in each frame, you need to fill the left and right buffers independently in RenderTone():
// Set the left and right buffers independently
Float32 tmp;
Float32 *buffer0 = (Float32 *)ioData->mBuffers[0].mData;
Float32 *buffer1 = (Float32 *)ioData->mBuffers[1].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    tmp = sin(theta) * amplitude;

    if (channelLR[0]) buffer0[frame] = tmp; else buffer0[frame] = 0;
    if (channelLR[1]) buffer1[frame] = tmp; else buffer1[frame] = 0;

    theta += theta_increment;
    if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
}
Of course channelLR[2] is a bool array whose elements you set to indicate whether the respective channel is audible. Note that the program has to explicitly set the frames of silent channels to zero, otherwise you get some funny tones.
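Since the question asks for a different frequency in each channel rather than the same tone gated on and off, the same loop can be extended with one phase and one frequency per channel. This is only a minimal sketch: frequencyL/frequencyR, thetaL/thetaR and sampleRate are assumed variables you would add alongside the existing tone-generator state, not part of Matt Gallagher's original code:

// Sketch: one independent sine per channel.
// frequencyL/frequencyR, thetaL/thetaR and sampleRate are assumed to exist.
double theta_incrementL = 2.0 * M_PI * frequencyL / sampleRate;
double theta_incrementR = 2.0 * M_PI * frequencyR / sampleRate;

Float32 *left  = (Float32 *)ioData->mBuffers[0].mData;
Float32 *right = (Float32 *)ioData->mBuffers[1].mData;

for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
    left[frame]  = sin(thetaL) * amplitude;
    right[frame] = sin(thetaR) * amplitude;

    thetaL += theta_incrementL;
    thetaR += theta_incrementR;
    if (thetaL > 2.0 * M_PI) thetaL -= 2.0 * M_PI;
    if (thetaR > 2.0 * M_PI) thetaR -= 2.0 * M_PI;
}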