iOS - Generate and play indefinite, simple audio (sine wave)

Date: 2013-01-22 19:22:53

Tags: ios objective-c core-audio

I'm looking to build a very simple iOS application with a button that starts and stops an audio signal. The signal will just be a sine wave, and it will check my model (an instance variable for the volume) throughout playback and change its volume accordingly.

My difficulty has to do with the indefinite nature of the task. I understand how to build tables, fill them with data, respond to button presses, and so on; but when it comes to having something simply continue indefinitely (in this case, a sound), I'm a little stuck! Any pointers would be terrific!

Thanks for reading.

1 Answer:

Answer 0 (score: 15)

Here's a bare-bones application which will generate a frequency on demand. You haven't specified whether you want iOS or OS X, so I've gone for OS X since it's slightly simpler (no messing with Audio Session categories). If you need iOS, you'll be able to work out the missing bits by looking into the Audio Session category basics and swapping the default output audio unit for the RemoteIO audio unit.
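As a rough sketch of those iOS-specific differences (the helper names here are just illustrative, and the session handling assumes AVAudioSession; the rest of the setup is the same as the example below):

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

//  On iOS, the hardware output is exposed as the RemoteIO unit rather than
//  the default output unit used in the OS X example below.
static AudioComponentDescription RemoteIOUnitDescription(void)
{
    AudioComponentDescription description = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    return description;
}

//  You also need to tell iOS that you intend to play audio before starting
//  the output unit.
static void ConfigureAudioSession(void)
{
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:NULL];
    [[AVAudioSession sharedInstance] setActive:YES error:NULL];
}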

Note that the intention here is purely to demonstrate some Core Audio / Audio Unit basics. You'll probably want to look into the AUGraph API if you want to start getting more complex than this (also, in the interest of providing a clean example, I'm not doing any error checking. Always do error checking when dealing with Core Audio).
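A minimal sketch of what that error checking might look like, assuming it lives in the same file as the AudioToolbox import below (the CheckError helper is just an illustrative convention, not an API call):

#include <stdio.h>
#include <stdlib.h>

//  Illustrative helper: fail loudly if a Core Audio call returns an error.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    fprintf(stderr, "Error: %s (%d)\n", operation, (int)error);
    exit(1);
}

//  Usage: wrap each call that returns an OSStatus, e.g.
//  CheckError(AudioUnitInitialize(outputUnit), "AudioUnitInitialize failed");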

You'll need to add the AudioToolbox and AudioUnit frameworks to your project in order to use this code.

#import <Cocoa/Cocoa.h>
#import <AudioToolbox/AudioToolbox.h>

@interface SWAppDelegate : NSObject <NSApplicationDelegate>
{
    AudioUnit outputUnit;
    double renderPhase;
}
@end

@implementation SWAppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification
{
//  First, we need to establish which Audio Unit we want.

//  We start with its description, which is:
    AudioComponentDescription outputUnitDescription = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

//  Next, we get the first (and only) component corresponding to that description
    AudioComponent outputComponent = AudioComponentFindNext(NULL, &outputUnitDescription);

//  Now we can create an instance of that component, which will create an
//  instance of the Audio Unit we're looking for (the default output)
    AudioComponentInstanceNew(outputComponent, &outputUnit);
    AudioUnitInitialize(outputUnit);

//  Next we'll tell the output unit what format our generated audio will
//  be in. Generally speaking, you'll want to stick to sane formats, since
//  the output unit won't accept every single possible stream format.
//  Here, we're specifying floating point samples with a sample rate of
//  44100 Hz in mono (i.e. 1 channel)
    AudioStreamBasicDescription ASBD = {
        .mSampleRate       = 44100,
        .mFormatID         = kAudioFormatLinearPCM,
        .mFormatFlags      = kAudioFormatFlagsNativeFloatPacked,
        .mChannelsPerFrame = 1,
        .mFramesPerPacket  = 1,
        .mBitsPerChannel   = sizeof(Float32) * 8,
        .mBytesPerPacket   = sizeof(Float32),
        .mBytesPerFrame    = sizeof(Float32)
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input,
                         0,
                         &ASBD,
                         sizeof(ASBD));

//  Next step is to tell our output unit which function we'd like it
//  to call to get audio samples. We'll also pass in a context pointer,
//  which can be a pointer to anything you need to maintain state between
//  render callbacks. We only need to point to a double which represents
//  the current phase of the sine wave we're creating.
    AURenderCallbackStruct callbackInfo = {
        .inputProc       = SineWaveRenderCallback,
        .inputProcRefCon = &renderPhase
    };

    AudioUnitSetProperty(outputUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Global,
                         0,
                         &callbackInfo,
                         sizeof(callbackInfo));

//  Here we're telling the output unit to start requesting audio samples
//  from our render callback. This is the line of code that starts actually
//  sending audio to your speakers.
    AudioOutputUnitStart(outputUnit);
}

// This is our render callback. It will be called very frequently for short
// buffers of audio (512 samples per call on my machine).
OSStatus SineWaveRenderCallback(void * inRefCon,
                                AudioUnitRenderActionFlags * ioActionFlags,
                                const AudioTimeStamp * inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList * ioData)
{
    // inRefCon is the context pointer we passed in earlier when setting the render callback
    double currentPhase = *((double *)inRefCon);
    // ioData is where we're supposed to put the audio samples we've created
    Float32 * outputBuffer = (Float32 *)ioData->mBuffers[0].mData;
    const double frequency = 440.;
    const double phaseStep = (frequency / 44100.) * (M_PI * 2.);

    for(int i = 0; i < inNumberFrames; i++) {
        outputBuffer[i] = sin(currentPhase);
        currentPhase += phaseStep;
    }

    // If we were doing stereo (or more), this would copy our sine wave samples
    // to all of the remaining channels
    for(int i = 1; i < ioData->mNumberBuffers; i++) {
        memcpy(ioData->mBuffers[i].mData, outputBuffer, ioData->mBuffers[i].mDataByteSize);
    }

    // writing the current phase back to inRefCon so we can use it on the next call
    *((double *)inRefCon) = currentPhase;
    return noErr;
}

- (void)applicationWillTerminate:(NSNotification *)notification
{
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
}

@end

You can call AudioOutputUnitStart() and AudioOutputUnitStop() at will to start/stop producing audio. If you want to dynamically change the frequency, you can pass in a pointer to a struct containing both the renderPhase double and another double representing the frequency you want.
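A minimal sketch of that approach (the SineWaveState name and its fields are just illustrative):

//  Illustrative shared state: the phase lives alongside the frequency so the
//  main thread can update the frequency while the callback keeps the phase.
typedef struct {
    double phase;      // replaces the bare renderPhase ivar
    double frequency;  // written from the main thread, read in the callback
} SineWaveState;

//  Pass &state as inputProcRefCon instead of &renderPhase, then in the
//  render callback read the frequency on every call instead of hard-coding it:
//
//  SineWaveState *state = (SineWaveState *)inRefCon;
//  const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);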

Be careful in your render callback. It's called from a realtime thread (not the same thread as your main run loop). Render callbacks are subject to some fairly strict time requirements, which means that there are a number of things you should not do in your callback, such as:

  • Allocate memory
  • Wait on a mutex
  • Read from a file on disk
  • Objective-C messaging (yes, seriously.)

Note that this isn't the only way to do this. I've only demonstrated it this way since you've tagged this question core-audio. If you don't need to change the frequency, you can just use AVAudioPlayer with a pre-made sound file containing your sine wave.
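A minimal sketch of the AVAudioPlayer route (the sine440.caf file name is hypothetical; you'd bundle your own pre-rendered sine wave):

#import <AVFoundation/AVFoundation.h>

//  Assumes a pre-rendered sine wave named "sine440.caf" in the app bundle.
//  Keep a strong reference to the player (e.g. in an ivar) so it isn't deallocated.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"sine440" withExtension:@"caf"];
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
player.numberOfLoops = -1;   // loop indefinitely
player.volume = 0.5;         // drive this from your model's volume ivar
[player play];               // ...and [player stop] when the button is toggled off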

还有Novocaine,它隐藏了你的许多冗长。您还可以查看Audio Queue API,它与我编写的Core Audio示例非常相似,但是您可以将它与硬件分离得更多(例如,它对渲染回调中的行为方式不太严格)。
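For comparison, here's a rough sketch of how the same generator might look with an Audio Queue (the SineState and SineQueueCallback names are illustrative, and buffer allocation/priming is only outlined in the comments):

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

typedef struct {
    double phase;
    double frequency;
} SineState;

//  The queue calls this whenever one of its buffers needs refilling; we fill
//  it with sine samples and hand it straight back to the queue.
static void SineQueueCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    SineState *state = (SineState *)inUserData;
    Float32 *samples = (Float32 *)inBuffer->mAudioData;
    UInt32 frameCount = inBuffer->mAudioDataBytesCapacity / sizeof(Float32);
    const double phaseStep = (state->frequency / 44100.) * (M_PI * 2.);

    for (UInt32 i = 0; i < frameCount; i++) {
        samples[i] = sin(state->phase);
        state->phase += phaseStep;
    }

    inBuffer->mAudioDataByteSize = frameCount * sizeof(Float32);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

//  Setup (using the same mono Float32 ASBD as in the Audio Unit example):
//
//  SineState state = { .phase = 0., .frequency = 440. };
//  AudioQueueRef queue;
//  AudioQueueNewOutput(&ASBD, SineQueueCallback, &state, NULL, NULL, 0, &queue);
//  Allocate a few buffers with AudioQueueAllocateBuffer(), prime each one by
//  calling SineQueueCallback() on it, then call AudioQueueStart(queue, NULL).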