OSStatus error -50 (invalid parameter) from AudioQueueNewInput when recording audio on iOS

Asked: 2015-05-23 13:20:06

Tags: ios objective-c core-audio

I've been trawling the web for ages trying to find the cause of this error, but I'm stuck. I've been following the Apple Developer documentation on recording audio with Audio Queue Services, and no matter what I do I keep getting this error.

I can record audio in any format using AVAudioRecorder, but my end goal is to get a normalized array of floats from the input data so that I can apply an FFT to it (apologies for any noob phrasing; I'm new to audio programming).
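For reference, the normalization step described here (16-bit PCM in, floats in [-1.0, 1.0] out, ready for an FFT) can be sketched in plain C. The helper name is hypothetical, not part of any API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: convert raw signed 16-bit PCM samples to
   floats in [-1.0, 1.0] by dividing by 2^15 = 32768. */
static void normalizeSamples(const int16_t *pcm, float *out, size_t count) {
    for (size_t i = 0; i < count; i++) {
        out[i] = pcm[i] / 32768.0f;
    }
}
```

In an input-queue callback, `pcm` would point at `inBuffer->mAudioData` and `count` would be `inBuffer->mAudioDataByteSize / sizeof(int16_t)`.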

Here is my code:

- (void)beginRecording
{
    // Initialise session
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];

    state.dataFormat.mFormatID = kAudioFormatLinearPCM;
    state.dataFormat.mSampleRate = 8000.0f;
    state.dataFormat.mChannelsPerFrame = 1;
    state.dataFormat.mBitsPerChannel = 16;
    state.dataFormat.mBytesPerPacket = state.dataFormat.mChannelsPerFrame * sizeof(SInt16);
    state.dataFormat.mFramesPerPacket = 1;

    //AudioFileTypeID fileID = kAudioFileAIFFType;

    state.dataFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

    OSStatus err = AudioQueueNewInput(&state.dataFormat, handleInputBuffer, &state, CFRunLoopGetMain(), kCFRunLoopCommonModes, 0, &state.queue);
    printf("%i", err); // this is always -50 i.e. invalid parameters error

    deriveBufferSize(state.queue, state.dataFormat, 0.5, &state.bufferByteState);

    for (int i = 0; i < kNumberOfBuffers; i++) {
        AudioQueueAllocateBuffer(state.queue, state.bufferByteState, &state.buffers[i]);
        AudioQueueEnqueueBuffer(state.queue, state.buffers[i], 0, NULL);
    }

    state.currentPacket = 0;
    state.isRunning = YES;

    AudioQueueStart(state.queue, NULL);
}

- (void)endRecording
{
    AudioQueueStop(state.queue, YES);
    state.isRunning = NO;

    AudioQueueDispose(state.queue, YES);

    // Close the audio file here...
}

#pragma mark - CoreAudio

// Core Audio Callback Function
static void handleInputBuffer(void *agData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {

    AQRecorderState *state = (AQRecorderState *)agData;

    if (inNumPackets == 0 && state->dataFormat.mBytesPerPacket != 0) {
        inNumPackets = inBuffer->mAudioDataByteSize / state->dataFormat.mBytesPerPacket;
    }

    printf("Called");

    /*
    if (AudioFileWritePackets(state->audioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, state->currentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
        state->currentPacket += inNumPackets;
    }
     */

    if (state->isRunning) {
        AudioQueueEnqueueBuffer(state->queue, inBuffer, 0, NULL);
    }
}

void deriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription ABSDescription, Float64 secs, UInt32 *outBufferSize) {

    static const int maxBufferSize = 0x50000;

    int maxPacketSize = ABSDescription.mBytesPerPacket;
    if (maxPacketSize == 0) {
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty(audioQueue, kAudioConverterPropertyMaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
    }

    Float64 numBytesForTime = ABSDescription.mSampleRate * maxPacketSize * secs;
    UInt32 x = (numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
    *outBufferSize = x;
}
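As a sanity check, the buffer-size math in `deriveBufferSize` can be recreated without Core Audio. For a constant-bitrate PCM format the VBR branch never runs, so the calculation is just sample rate times packet size times duration, capped at the maximum. The function name is illustrative, not from any API:

```c
#include <assert.h>
#include <stdint.h>

/* Plain-C recreation of the CBR buffer-size math from deriveBufferSize:
   bytes = sampleRate * bytesPerPacket * seconds, capped at 0x50000. */
static uint32_t bufferSizeForSeconds(double sampleRate,
                                     uint32_t bytesPerPacket,
                                     double secs) {
    const uint32_t maxBufferSize = 0x50000;
    double numBytesForTime = sampleRate * bytesPerPacket * secs;
    return numBytesForTime < maxBufferSize ? (uint32_t)numBytesForTime
                                           : maxBufferSize;
}
```

For the format in the question (8000 Hz mono 16-bit, 2 bytes per packet) and 0.5 seconds, this gives 8000-byte buffers.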

If anyone knows what's going on here, I'd be very grateful. Here is the apple docs for the error

1 Answer:

Answer 0 (score: 2)

You are getting a -50 (kAudio_ParamError) because you haven't initialised the AudioStreamBasicDescription's mBytesPerFrame field:

asbd.mBytesPerFrame = asbd.mFramesPerPacket*asbd.mBytesPerPacket;

where asbd is short for state.dataFormat. In your case mBytesPerFrame = 2.
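To make the fix concrete, here is a sketch of the complete size-field setup for interleaved linear PCM, using a plain struct that mirrors only the relevant AudioStreamBasicDescription fields so the arithmetic can be checked without the Core Audio headers (the struct and function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the relevant AudioStreamBasicDescription fields. */
typedef struct {
    double   mSampleRate;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mBytesPerPacket;
} PCMFormat;

/* Fill in every size field for interleaved linear PCM. For the
   question's format (mono, 16-bit) this yields
   mBytesPerFrame == mBytesPerPacket == 2. */
static PCMFormat makePCMFormat(double sampleRate, uint32_t channels,
                               uint32_t bitsPerChannel) {
    PCMFormat f;
    f.mSampleRate       = sampleRate;
    f.mChannelsPerFrame = channels;
    f.mBitsPerChannel   = bitsPerChannel;
    f.mFramesPerPacket  = 1;  /* always 1 for uncompressed PCM */
    f.mBytesPerFrame    = channels * (bitsPerChannel / 8);
    f.mBytesPerPacket   = f.mFramesPerPacket * f.mBytesPerFrame;
    return f;
}
```

Leaving mBytesPerFrame at zero while mBytesPerPacket is non-zero describes an inconsistent format, which is why AudioQueueNewInput rejects it with -50.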

I also wouldn't specify kLinearPCMFormatFlagIsBigEndian; let the recorder return samples in native byte order.
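A sketch of the flag arithmetic behind that advice, with the bit values mirrored from my reading of CoreAudioTypes.h (worth double-checking against your SDK rather than trusting these redefinitions): iOS devices are little-endian, so omitting the big-endian bit is what "native byte order" means there.

```c
#include <assert.h>
#include <stdint.h>

/* Bit values as I understand them from CoreAudioTypes.h; these local
   names are stand-ins, not the real constants. */
enum {
    kFlagIsFloat         = 1u << 0,
    kFlagIsBigEndian     = 1u << 1,
    kFlagIsSignedInteger = 1u << 2,
    kFlagIsPacked        = 1u << 3,
};

/* Signed, packed, native byte order: no big-endian bit set. */
static uint32_t nativeOrderPCMFlags(void) {
    return kFlagIsSignedInteger | kFlagIsPacked;
}
```
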