Creating a CMSampleBufferRef from an AudioInputIOProc

Time: 2018-03-06 01:48:03

Tags: c++ objective-c macos audio core-audio

I have an AudioInputIOProc that I'm getting an AudioBufferList from. I need to convert this AudioBufferList to a CMSampleBufferRef.

Here is the code I've written so far:

    - (void)handleAudioSamples:(const AudioBufferList*)samples numSamples:(UInt32)numSamples hostTime:(UInt64)hostTime {
        // Create a CMSampleBufferRef from the list of samples, which we'll own
        AudioStreamBasicDescription monoStreamFormat;
        memset(&monoStreamFormat, 0, sizeof(monoStreamFormat));
        monoStreamFormat.mSampleRate = 44100;
        monoStreamFormat.mFormatID = kAudioFormatMPEG4AAC;
        monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
        monoStreamFormat.mBytesPerPacket = 4;
        monoStreamFormat.mFramesPerPacket = 1;
        monoStreamFormat.mBytesPerFrame = 4;
        monoStreamFormat.mChannelsPerFrame = 2;
        monoStreamFormat.mBitsPerChannel = 16;

        CMFormatDescriptionRef format = NULL;
        OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
        if (status != noErr) {
            // really shouldn't happen
            return;
        }

        mach_timebase_info_data_t tinfo;
        mach_timebase_info(&tinfo);

        UInt64 _hostTimeToNSFactor = (double)tinfo.numer / tinfo.denom;
        uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
        CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
        CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };

        CMSampleBufferRef sampleBuffer = NULL;
        status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
        if (status != noErr) {
            // couldn't create the sample buffer
            NSLog(@"Failed to create sample buffer");
            CFRelease(format);
            return;
        }

        // add the samples to the buffer
        status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, samples);
        if (status != noErr) {
            NSLog(@"Failed to add samples to sample buffer");
            CFRelease(sampleBuffer);
            CFRelease(format);
            NSLog(@"Error status code: %d", status);
            return;
        }

        [self addAudioFrame:sampleBuffer];

        NSLog(@"Original sample buf size: %ld for %d samples from %d buffers, first buffer has size %d", CMSampleBufferGetTotalSampleSize(sampleBuffer), numSamples, samples->mNumberBuffers, samples->mBuffers[0].mDataByteSize);
        NSLog(@"Original sample buf has %ld samples", CMSampleBufferGetNumSamples(sampleBuffer));
    }

Now, I'm unsure how to calculate numSamples given this function definition of an AudioInputIOProc:

    OSStatus AudioTee::InputIOProc(AudioDeviceID inDevice, const AudioTimeStamp *inNow, const AudioBufferList *inInputData, const AudioTimeStamp *inInputTime, AudioBufferList *outOutputData, const AudioTimeStamp *inOutputTime, void *inClientData)

This definition lives in the AudioTee.cpp file in WavTap.
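For reference, a plausible way to derive numSamples from those arguments is to divide the first buffer's byte size by the size of one frame. This is only a minimal sketch, assuming the interleaved 32-bit stereo PCM format described in the update below:

    // Inside InputIOProc: derive the frame count from the buffer's byte size.
    // Assumption: interleaved 32-bit stereo PCM, so one frame is
    // 2 channels * 4 bytes = 8 bytes. For non-interleaved data each
    // AudioBuffer holds a single channel and a frame is 4 bytes instead.
    UInt32 bytesPerFrame = 2 * sizeof(SInt32); // 8 bytes per interleaved stereo frame
    UInt32 numSamples = inInputData->mBuffers[0].mDataByteSize / bytesPerFrame;
    UInt64 hostTime = inInputTime->mHostTime;  // host time to hand to handleAudioSamples: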

The error I get when I try to call CMSampleBufferSetDataBufferFromAudioBufferList is -12731, which is kCMSampleBufferError_RequiredParameterMissing.

Update

To clarify the problem, here is the format of the audio data I'm getting from my AudioDeviceIOProc:

    Channels: 2, Sample Rate: 44100, Precision: 32-bit, Sample Encoding: 32-bit Signed Integer PCM, Endian Type: little, Reverse Nibbles: no, Reverse Bits: no

I'm getting an AudioBufferList* that contains all the audio data I need (30 seconds' worth), which I have to convert into CMSampleBufferRef*s and append to a video (also 30 seconds long) that is being written to disk through an AVAssetWriterInput.
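On the writing side, the append step might look like the following sketch; audioWriterInput and writer are hypothetical names for an already-configured AVAssetWriterInput and its AVAssetWriter, with a session already started:

    // Sketch: hand each converted sample buffer to a running
    // AVAssetWriter session through its (hypothetical) audio input.
    if (audioWriterInput.isReadyForMoreMediaData) {
        if (![audioWriterInput appendSampleBuffer:sampleBuffer]) {
            NSLog(@"appendSampleBuffer failed: %@", writer.error);
        }
    } else {
        NSLog(@"Audio writer input not ready; buffer dropped");
    }
    CFRelease(sampleBuffer); // we own the buffer we created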

1 Answer:

Answer 0 (score: 3)

Three things look wrong:

  1. You declare the format ID to be kAudioFormatMPEG4AAC, yet you configure it as LPCM. So try

    monoStreamFormat.mFormatID = kAudioFormatLinearPCM;

    You also call the format “mono” even though it is configured as stereo.

  2. Why use mach_timebase_info, which can leave gaps in your audio presentation timestamps? Use a sample count instead:

    CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);

  3. Your CMSampleTimingInfo looks wrong, and you're not using presentationTime. You set your buffer's duration to 1 sample when it could be numSamples, and its presentation time to zero, which is probably incorrect. Something like this would make more sense (all three fixes are combined in the sketch below):

    CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };
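Taken together, a minimal sketch of the corrected setup might look like this; it assumes the 32-bit interleaved stereo PCM format from the question, and numSamplesProcessed is a hypothetical running frame counter kept alongside the handler:

    // Sketch combining fixes 1-3: an LPCM format description that matches
    // the device (32-bit signed integer, stereo, interleaved), a
    // presentation time driven by a running sample count, and timing
    // that spans all numSamples frames in the buffer.
    AudioStreamBasicDescription stereoStreamFormat = {0};
    stereoStreamFormat.mSampleRate       = 44100;
    stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;   // fix 1: LPCM, not AAC
    stereoStreamFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
    stereoStreamFormat.mChannelsPerFrame = 2;
    stereoStreamFormat.mBitsPerChannel   = 32;
    stereoStreamFormat.mBytesPerFrame    = 8;                       // 2 channels * 4 bytes, interleaved
    stereoStreamFormat.mFramesPerPacket  = 1;
    stereoStreamFormat.mBytesPerPacket   = 8;

    // fix 2: presentation time from the running sample count, not mach_timebase_info
    CMTime presentationTime = CMTimeMake(numSamplesProcessed, 44100);
    // fix 3: the duration covers all numSamples frames, and presentationTime is used
    CMSampleTimingInfo timing = { CMTimeMake(numSamples, 44100), presentationTime, kCMTimeInvalid };
    numSamplesProcessed += numSamples;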

  Some other questions:

    Does your AudioBufferList have the expected 2 AudioBuffers? Do you have a runnable version of this?

    P.S. I'm guilty of this myself, but allocating memory on the audio thread is considered harmful in audio development.