Audio recording: AudioQueueStart buffer is never filled

Date: 2017-08-14 15:45:38

Tags: ios objective-c iphone cocoa-touch aac

I'm using AudioQueueStart to start recording on an iOS device, and I want all of the recorded data streamed to me in buffers so I can process it and send it to a server.

The basic functionality works fine, but in my BufferFilled function I usually get < 10 bytes of data per call. That feels very inefficient, especially since I have tried to set the buffer size to 16384 bytes (see the beginning of the startRecording method).

How can I get it to fill the buffer more before calling BufferFilled? Or do I need a second stage of buffering before sending to the server to achieve what I want?

// AudioFile write callback: receives the encoded file bytes as AudioFileWritePackets produces them
OSStatus BufferFilled(void *aqData, SInt64 inPosition, UInt32 requestCount, const void *inBuffer, UInt32 *actualCount) {
    AQRecorderState *pAqData = (AQRecorderState*)aqData;

    // Copy the chunk so it can be processed and sent to the server
    NSData *audioData = [NSData dataWithBytes:inBuffer length:requestCount];

    // Report every requested byte as written
    *actualCount = requestCount;

    // audioData is usually < 10 bytes, sometimes 100 bytes, but never close to 16384 bytes

    return noErr;
}

// Audio queue input callback: called each time the queue has filled a capture buffer
void HandleInputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {
    AQRecorderState *pAqData = (AQRecorderState*)aqData;

    // For constant-bitrate formats the packet count is not supplied, so derive it from the byte size
    if (inNumPackets == 0 && pAqData->mDataFormat.mBytesPerPacket != 0)
        inNumPackets = inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;

    // Write the captured packets to the callback-backed audio file, which ends up in BufferFilled
    if (AudioFileWritePackets(pAqData->mAudioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, pAqData->mCurrentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
        pAqData->mCurrentPacket += inNumPackets;
    }

    if (pAqData->mIsRunning == 0)
        return;

    // Hand the buffer back to the queue so it can be filled again
    OSStatus error = AudioQueueEnqueueBuffer(pAqData->mQueue, inBuffer, 0, NULL);
}


// Computes a capture buffer size big enough for `seconds` of audio, capped at 0x50000 bytes
void DeriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription *ASBDescription, Float64 seconds, UInt32 *outBufferSize) {
    static const int maxBufferSize = 0x50000;

    int maxPacketSize = ASBDescription->mBytesPerPacket;
    if (maxPacketSize == 0) {
        // VBR format: ask the queue for the largest packet it can produce
        UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
        AudioQueueGetProperty(audioQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
    }

    Float64 numBytesForTime = ASBDescription->mSampleRate * maxPacketSize * seconds;
    *outBufferSize = (UInt32)(numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
}
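
Note that DeriveBufferSize is never actually called in the code below; startRecording hard-codes bufferByteSize to 16384. If it were used, the call would have to come after AudioQueueNewInput (it needs the queue), and might look roughly like this (the 0.5-second value is just an illustrative choice):

    // e.g. size the buffers for about half a second of audio (illustrative, not from the original code)
    DeriveBufferSize(aqData.mQueue, &aqData.mDataFormat, 0.5, &aqData.bufferByteSize);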

OSStatus SetMagicCookieForFile (AudioQueueRef inQueue, AudioFileID inFile) {
    OSStatus result = noErr;
    UInt32 cookieSize;

    if (AudioQueueGetPropertySize (inQueue, kAudioQueueProperty_MagicCookie, &cookieSize) == noErr) {
        char *magicCookie = (char *) malloc (cookieSize);
        if (AudioQueueGetProperty (inQueue, kAudioQueueProperty_MagicCookie, magicCookie, &cookieSize) == noErr)
            result = AudioFileSetProperty (inFile, kAudioFilePropertyMagicCookieData, cookieSize, magicCookie);
        free(magicCookie);
    }
    return result;
}


- (void)startRecording {

    // AAC (VBR), 22.05 kHz, mono; bytes/bits per frame and packet are 0 because the encoder decides
    aqData.mDataFormat.mFormatID         = kAudioFormatMPEG4AAC;
    aqData.mDataFormat.mSampleRate       = 22050.0;
    aqData.mDataFormat.mChannelsPerFrame = 1;
    aqData.mDataFormat.mBitsPerChannel   = 0;
    aqData.mDataFormat.mBytesPerPacket   = 0;
    aqData.mDataFormat.mBytesPerFrame    = 0;
    aqData.mDataFormat.mFramesPerPacket  = 1024;
    aqData.mDataFormat.mFormatFlags      = kMPEG4Object_AAC_Main;
    AudioFileTypeID fileType             = kAudioFileAAC_ADTSType;
    aqData.bufferByteSize = 16384;       // requested size of each capture buffer


    // Legacy Audio Session C API (deprecated since iOS 7)
    UInt32 defaultToSpeaker = TRUE;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(defaultToSpeaker), &defaultToSpeaker);

    // Create the recording queue, then read back the fully specified stream description
    OSStatus status = AudioQueueNewInput(&aqData.mDataFormat, HandleInputBuffer, &aqData, NULL, kCFRunLoopCommonModes, 0, &aqData.mQueue);
    UInt32 dataFormatSize = sizeof (aqData.mDataFormat);

    status = AudioQueueGetProperty(aqData.mQueue, kAudioQueueProperty_StreamDescription, &aqData.mDataFormat, &dataFormatSize);
    // Audio "file" backed by callbacks, so the encoded ADTS bytes arrive in BufferFilled instead of on disk
    status = AudioFileInitializeWithCallbacks(&aqData, nil, BufferFilled, nil, nil, fileType, &aqData.mDataFormat, 0, &aqData.mAudioFile);

    // Allocate the capture buffers and hand them to the queue
    for (int i = 0; i < kNumberBuffers; ++i) {
        status = AudioQueueAllocateBuffer (aqData.mQueue, aqData.bufferByteSize, &aqData.mBuffers[i]);
        status = AudioQueueEnqueueBuffer (aqData.mQueue, aqData.mBuffers[i], 0, NULL);
    }

    aqData.mCurrentPacket = 0;                           
    aqData.mIsRunning = true;                            

    status = AudioQueueStart(aqData.mQueue, NULL);
}

Update: I have logged the data I receive and it's quite interesting: it almost looks like half of the "packets" are some kind of header and half are sound data. Is it safe to assume this is just how AAC encoding works on iOS? It writes the header to one buffer, then the data to the next, and so on, and it never wants more than roughly 170-180 bytes per data chunk, which is why it ignores my large buffer?
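
One way to sanity-check that hypothesis: ADTS-framed AAC prefixes every frame with a 7-byte header (9 bytes with a CRC) that begins with a 12-bit syncword of all ones (0xFFF). A quick check along these lines (the helper is just for illustration, not part of the original code) would show whether a chunk handed to BufferFilled starts with an ADTS header or is a continuation of frame payload:

// Hypothetical helper: true if the chunk begins with an ADTS syncword (0xFFF)
static bool ChunkStartsWithADTSHeader(const void *chunk, UInt32 length) {
    const uint8_t *bytes = (const uint8_t *)chunk;
    return length >= 2 && bytes[0] == 0xFF && (bytes[1] & 0xF0) == 0xF0;
}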

1 Answer:

Answer (score: 0):

I ended up solving this. It turns out that, yes, the encoding on iOS produces data in both small and large chunks. I added a second-stage buffer myself using NSMutableData and it works perfectly.
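
The answer doesn't include code, but a second-stage buffer of that kind is only a few lines. A minimal sketch, assuming a file-scope NSMutableData and a flush threshold (pendingData, kUploadChunkSize, and sendChunkToServer are illustrative names, not from the original post): accumulate each small chunk the write callback receives, and only flush once the target size is reached.

// Sketch of a second-stage buffer inside the write callback (names are hypothetical)
static NSMutableData *pendingData = nil;              // survives across callbacks; created lazily
static const NSUInteger kUploadChunkSize = 16384;     // flush threshold

OSStatus BufferFilled(void *aqData, SInt64 inPosition, UInt32 requestCount, const void *inBuffer, UInt32 *actualCount) {
    if (pendingData == nil)
        pendingData = [NSMutableData data];

    // Accumulate whatever small chunk the ADTS writer hands us
    [pendingData appendBytes:inBuffer length:requestCount];

    // Once enough data has piled up, send it and start over
    if (pendingData.length >= kUploadChunkSize) {
        // sendChunkToServer(pendingData);            // however the upload is actually done
        [pendingData setLength:0];
    }

    *actualCount = requestCount;                      // report all requested bytes as consumed
    return noErr;
}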