How to get a CMSampleBufferRef from an AudioQueueBufferRef

Date: 2013-11-26 08:55:40

Tags: ios iphone ffmpeg avfoundation

I'm using a private library for live streaming on the iPhone. Every time a frame is recorded, it calls the delegate function

void MyAQInputCallback(void *inUserData,
                       AudioQueueRef inQueue,
                       AudioQueueBufferRef inBuffer,
                       const AudioTimeStamp *inStartTime,
                       UInt32 inNumPackets,
                       const AudioStreamPacketDescription *inPacketDesc);
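
For context, this signature matches the standard AudioQueueInputCallback type from Audio Toolbox, so with the public API the equivalent callback would be registered roughly like this (a sketch; the 16 kHz mono LPCM format and variable names are illustrative, not from the library):

#include <AudioToolbox/AudioToolbox.h>

// Sketch: register MyAQInputCallback on a recording queue via the public API.
AudioStreamBasicDescription format = {0};
format.mSampleRate       = 16000.0;
format.mFormatID         = kAudioFormatLinearPCM;
format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
format.mChannelsPerFrame = 1;
format.mBitsPerChannel   = 16;
format.mBytesPerFrame    = 2;  // mChannelsPerFrame * (mBitsPerChannel / 8)
format.mFramesPerPacket  = 1;
format.mBytesPerPacket   = 2;  // mBytesPerFrame * mFramesPerPacket

AudioQueueRef queue;
AudioQueueNewInput(&format, MyAQInputCallback,
                   NULL,        // inUserData, handed back to the callback
                   NULL, NULL,  // run callbacks on the queue's internal thread
                   0, &queue);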

Now, how can I append this inBuffer to my AVAssetWriterInput the way I usually do:

[self.audioWriterInput appendSampleBuffer:sampleBuffer];

I'm guessing there may be some way to convert the AudioQueueBufferRef into a CMSampleBufferRef?

Thanks.

1 Answer:

Answer 0 (score: 2)

I don't suppose you're still looking for a solution two years later, but in case someone is in a similar situation and finds this question (as I did), here is my solution.

My Audio Queue callback function calls the appendAudioBuffer function below, passing it the AudioQueueBufferRef's data and its length (mAudioDataByteSize).
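
Inside the callback, that amounts to something like this (a sketch; re-enqueueing the buffer so the queue can keep recording is assumed to happen here as well):

void MyAQInputCallback(void *inUserData,
                       AudioQueueRef inQueue,
                       AudioQueueBufferRef inBuffer,
                       const AudioTimeStamp *inStartTime,
                       UInt32 inNumPackets,
                       const AudioStreamPacketDescription *inPacketDesc)
{
    // Hand the raw LPCM bytes to appendAudioBuffer below...
    appendAudioBuffer(inBuffer->mAudioData, inBuffer->mAudioDataByteSize);

    // ...then return the buffer to the queue so recording continues.
    AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
}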

void appendAudioBuffer(void* pBuffer, long pLength)
{
    // CMSampleBuffers require a CMBlockBuffer to hold the media data; we
    // create a blockBuffer here from the AudioQueueBuffer's data.

    CMBlockBufferRef blockBuffer;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                 pBuffer,
                                                 pLength,
                                                 kCFAllocatorNull, // data is not owned by the block buffer
                                                 NULL,
                                                 0,
                                                 pLength,
                                                 kCMBlockBufferAssureMemoryNowFlag,
                                                 &blockBuffer);
    if (status != kCMBlockBufferNoErr) {
        NSLog(@"CMBlockBufferCreateWithMemoryBlock failed: %d", (int)status);
        return;
    }

    // Timestamp of the current sample, relative to the start of recording
    CFAbsoluteTime currentTime = CFAbsoluteTimeGetCurrent();
    CFTimeInterval elapsedTime = currentTime - mStartTime;
    CMTime timeStamp = CMTimeMake(elapsedTime * mTimeScale, mTimeScale);

    // Number of samples in the buffer
    long nSamples = pLength / mWaveRecorder->audioFormat()->mBytesPerFrame;

    CMSampleBufferRef sampleBuffer;
    OSStatus err = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                                   blockBuffer,
                                                                   true,  // dataReady
                                                                   NULL,
                                                                   NULL,
                                                                   mAudioFormatDescription,
                                                                   nSamples,
                                                                   timeStamp,
                                                                   NULL,  // no packet descriptions for LPCM
                                                                   &sampleBuffer);
    if (err != noErr) {
        NSLog(@"CMAudioSampleBufferCreateWithPacketDescriptions failed: %d", (int)err);
        CFRelease(blockBuffer);
        return;
    }

    // Add the audio sample to the asset writer input
    if ([mAudioWriterInput isReadyForMoreMediaData]) {
        if (![mAudioWriterInput appendSampleBuffer:sampleBuffer])
            NSLog(@"appendSampleBuffer failed");
    } else {
        // Either do nothing and just log an error, or queue the CMSampleBuffer
        // somewhere and append it later, when the AVAssetWriterInput is ready.
        NSLog(@"AVAssetWriterInput not ready for more media data");
    }

    CFRelease(sampleBuffer);
    CFRelease(blockBuffer);
}
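
The mAudioFormatDescription used above is a CMAudioFormatDescriptionRef. It only needs to be created once, before recording starts; a minimal sketch, assuming the same AudioStreamBasicDescription the queue records with:

// Sketch: build mAudioFormatDescription once from the recorder's LPCM ASBD,
// before recording starts.
const AudioStreamBasicDescription *asbd = mWaveRecorder->audioFormat();
OSStatus res = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                              asbd,
                                              0, NULL,  // no channel layout
                                              0, NULL,  // no magic cookie (LPCM has none)
                                              NULL,     // no extensions
                                              &mAudioFormatDescription);
if (res != noErr)
    NSLog(@"CMAudioFormatDescriptionCreate failed: %d", (int)res);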

Note that the audio is not compressed by the time appendAudioBuffer is called; the format is specified as LPCM (which is why I don't use packet descriptions, since LPCM has none). The AVAssetWriterInput handles the compression. I originally tried passing AAC data to the AVAssetWriter, but that led to far too much complexity and I couldn't get it to work.
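
For completeness, the writer input that performs that compression can be set up along these lines (a sketch; the AAC settings shown are illustrative, and mAudioWriterInput is the member used in appendAudioBuffer above):

// Sketch: an AVAssetWriterInput that accepts LPCM sample buffers and
// compresses them to AAC as it writes.
NSDictionary *aacSettings = @{
    AVFormatIDKey:         @(kAudioFormatMPEG4AAC),
    AVSampleRateKey:       @16000,
    AVNumberOfChannelsKey: @1,
    AVEncoderBitRateKey:   @64000
};
mAudioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                       outputSettings:aacSettings];
mAudioWriterInput.expectsMediaDataInRealTime = YES; // fed live from the Audio Queue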