Does iPhone OS support kAudioFormatFlagIsFloat?

Date: 2010-05-21 12:29:24

Tags: iphone audiounit

I'm writing an iPhone app that records and plays back audio simultaneously using the I/O audio unit, per Apple's recommendations.

I want to apply some sound effects (reverb, etc.) to the recorded audio before playing it back. For these effects to work well, the samples need to be floating-point numbers rather than integers. It seems this should be possible by setting kAudioFormatFlagIsFloat in the mFormatFlags field of the AudioStreamBasicDescription. Here is my code:

AudioStreamBasicDescription streamDescription;

streamDescription.mSampleRate = 44100.0;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kAudioFormatFlagIsFloat;
streamDescription.mBitsPerChannel = 32;
streamDescription.mBytesPerFrame = 4;
streamDescription.mBytesPerPacket = 4;
streamDescription.mChannelsPerFrame = 1;
streamDescription.mFramesPerPacket = 1;
streamDescription.mReserved = 0;

OSStatus status;

status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
if (status != noErr)
  fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned status %ld\n", status);

status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &streamDescription, sizeof(streamDescription));
if (status != noErr)
  fprintf(stderr, "AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned status %ld\n", status);

However, when I run this (on an iPhone 3GS running iPhone OS 3.1.3), I get:

AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input) returned error -10868
AudioUnitSetProperty (kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output) returned error -10868

(-10868 is the value of kAudioUnitErr_FormatNotSupported.)

I haven't found anything useful in Apple's documentation, apart from its recommendation to stick with 16-bit little-endian integers. However, the aurioTouch sample project does contain at least some support code related to kAudioFormatFlagIsFloat.

So, is my stream description incorrect, or is kAudioFormatFlagIsFloat simply not supported on iPhone OS?

5 Answers:

Answer 0 (score: 2):

As far as I know, it's not supported. However, you can convert to floats fairly easily using AudioConverter. I do this conversion (in both directions) in real time to use the Accelerate framework with iOS audio. (Note: this code was copied and pasted from more modular code, so there may be some minor typos.)

First, you need the AudioStreamBasicDescription for your input. Say:

AudioStreamBasicDescription aBasicDescription = {0};
aBasicDescription.mSampleRate       = self.samplerate;
aBasicDescription.mFormatID         = kAudioFormatLinearPCM;
aBasicDescription.mFormatFlags      = kAudioFormatFlagIsSignedInteger |     kAudioFormatFlagIsPacked;
aBasicDescription.mFramesPerPacket          = 1;
aBasicDescription.mChannelsPerFrame     = 1;
aBasicDescription.mBitsPerChannel       = 8 * sizeof(SInt16);
aBasicDescription.mBytesPerPacket       = sizeof(SInt16) * aBasicDescription.mFramesPerPacket;
aBasicDescription.mBytesPerFrame        = sizeof(SInt16) * aBasicDescription.mChannelsPerFrame;

Then, generate the corresponding AudioStreamBasicDescription for float:

AudioStreamBasicDescription floatDesc = {0};
floatDesc.mFormatID = kAudioFormatLinearPCM;      
floatDesc.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
floatDesc.mBitsPerChannel = 8 * sizeof(float);
floatDesc.mFramesPerPacket = 1;                                          
floatDesc.mChannelsPerFrame = 1;           
floatDesc.mBytesPerPacket = sizeof(float) * floatDesc.mFramesPerPacket;                                                                            
floatDesc.mBytesPerFrame = sizeof(float) * floatDesc.mChannelsPerFrame;                                                                                   
floatDesc.mSampleRate = [controller samplerate];

Make some buffers:

UInt32 intSize = inNumberFrames * sizeof(SInt16);
UInt32 floatSize = inNumberFrames * sizeof(float);
float *dataBuffer = (float *)calloc(inNumberFrames, sizeof(float));

Then convert. (ioData is the AudioBufferList containing the int audio.)

AudioConverterRef converter;
OSStatus err = noErr;
err = AudioConverterNew(&aBasicDescription, &floatDesc, &converter);
//check for error here in "real" code
err = AudioConverterConvertBuffer(converter, intSize, ioData->mBuffers[0].mData, &floatSize, dataBuffer);
//check for error here in "real" code
//do stuff to dataBuffer, which now contains floats
//convert the floats back by running the conversion the other way
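The 16-bit/float scaling that such a conversion performs can be illustrated in plain C. This is only a sketch of the idea, not the AudioConverter API; the 1/32768 normalization factor and the clipping behavior are assumptions, and the helper names are hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: normalize SInt16 samples to roughly [-1.0, 1.0). */
static void int16_to_float(const int16_t *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (float)in[i] / 32768.0f;
}

/* And back, clipping to the representable SInt16 range. */
static void float_to_int16(const float *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float s = in[i] * 32768.0f;
        if (s > 32767.0f)  s = 32767.0f;
        if (s < -32768.0f) s = -32768.0f;
        out[i] = (int16_t)s;
    }
}
```

On iOS you would normally let AudioConverter (or the Accelerate framework's vDSP routines) do this work for whole buffers rather than looping sample by sample.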

Answer 1 (score: 1):

From the Core Audio documentation:

kAudioFormatFlagIsFloat
  Set for floating point, clear for integer.
  Available in iPhone OS 2.0 and later.
  Declared in CoreAudioTypes.h.

I don't know enough about your stream to comment on its correctness.

Answer 2 (score: 1):

It is supported.

The problem is that you also have to set kAudioFormatFlagIsNonInterleaved in mFormatFlags. If you don't do this when setting kAudioFormatFlagIsFloat, you'll get a format error.

So when preparing your AudioStreamBasicDescription, you want to do this:

streamDescription.mFormatFlags = kAudioFormatFlagIsFloat | 
                                 kAudioFormatFlagIsNonInterleaved;

As for why iOS requires this, I'm not sure; I just stumbled on it through trial and error.

Answer 3 (score: 1):

I'm doing something unrelated to AudioUnits, but I do use AudioStreamBasicDescription on iOS. I was able to use float samples by specifying:

dstFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked;

The book Learning Core Audio: A Hands-on Guide to Audio Programming for Mac and iOS was very helpful for this.

Answer 4 (score: 0):

You can get an interleaved-float RemoteIO with the following ASBD setup:

// STEREO_CHANNEL = 2, defaultSampleRate = 44100
AudioStreamBasicDescription const audioDescription = {
    .mSampleRate        = defaultSampleRate,
    .mFormatID          = kAudioFormatLinearPCM,
    .mFormatFlags       = kAudioFormatFlagIsFloat,
    .mBytesPerPacket    = STEREO_CHANNEL * sizeof(float),
    .mFramesPerPacket   = 1,
    .mBytesPerFrame     = STEREO_CHANNEL * sizeof(float),
    .mChannelsPerFrame  = STEREO_CHANNEL,
    .mBitsPerChannel    = 8 * sizeof(float),
    .mReserved          = 0
};

This worked for me.
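Whichever flags you end up using, the byte-count fields of an interleaved linear-PCM ASBD must agree with one another, and a mismatch can also produce format errors. The helper below is not part of Core Audio; it's just a hypothetical sketch of the arithmetic those fields have to satisfy:

```c
/* Hypothetical helper: returns nonzero when the byte-count fields of an
   interleaved linear-PCM stream description are mutually consistent. */
static int asbd_fields_consistent(unsigned channelsPerFrame,
                                  unsigned bitsPerChannel,
                                  unsigned bytesPerFrame,
                                  unsigned framesPerPacket,
                                  unsigned bytesPerPacket)
{
    return bytesPerFrame == channelsPerFrame * (bitsPerChannel / 8)
        && bytesPerPacket == framesPerPacket * bytesPerFrame;
}
```

For the stereo float ASBD above, this gives mBytesPerFrame = 2 * 4 = 8 and mBytesPerPacket = 1 * 8 = 8, matching the STEREO_CHANNEL * sizeof(float) values in the initializer.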
