Muxing an AAC audio and H.264 video stream into MP4 with AVFoundation

Date: 2018-05-02 19:43:22

Tags: ios mp4 aac avassetwriter mux

For both OS X and iOS, I have real-time streams of encoded video (H.264) and audio (AAC) data, and I want to be able to mux them together into an MP4.

I am using AVAssetWriter to perform the muxing.

I have video working, but my audio still sounds like garbled static. Here's what I'm currently trying (skipping some of the error checking here for brevity):

I initialize the writer:

   NSURL *url = [NSURL fileURLWithPath:mContext->filename];
   NSError* err = nil;
   mContext->writer = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeMPEG4 error:&err];

I initialize the audio input:

     NSDictionary* settings;
     AudioChannelLayout acl;
     bzero(&acl, sizeof(acl));
     acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
     settings = nil; // set output to nil so it becomes a pass-through

     CMAudioFormatDescriptionRef audioFormatDesc = nil;
     {
        AudioStreamBasicDescription absd = {0};
        absd.mSampleRate = mParameters.audioSampleRate; //known sample rate
        absd.mFormatID = kAudioFormatMPEG4AAC;
        absd.mFormatFlags = kMPEG4Object_AAC_Main;
        CMAudioFormatDescriptionCreate(NULL, &absd, 0, NULL, 0, NULL, NULL, &audioFormatDesc);
     }

     mContext->aacWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:settings sourceFormatHint:audioFormatDesc];
     mContext->aacWriterInput.expectsMediaDataInRealTime = YES;
     [mContext->writer addInput:mContext->aacWriterInput];

Start the writer:

   [mContext->writer startWriting];
   [mContext->writer startSessionAtSourceTime:kCMTimeZero];

Then I have a callback in which I receive a packet with a timestamp (in milliseconds) and a std::vector<uint8_t> whose data contains 1024 compressed samples. I make sure isReadyForMoreMediaData is true. Then, if this is the first time we've received this callback, I set up the CMAudioFormatDescription:

   OSStatus error = 0;

   AudioStreamBasicDescription streamDesc = {0};
   streamDesc.mSampleRate = mParameters.audioSampleRate;
   streamDesc.mFormatID = kAudioFormatMPEG4AAC;
   streamDesc.mFormatFlags = kMPEG4Object_AAC_Main;
   streamDesc.mChannelsPerFrame = 2;  // always stereo for us
   streamDesc.mBitsPerChannel = 0;
   streamDesc.mBytesPerFrame = 0;
   streamDesc.mFramesPerPacket = 1024; // Our AAC packets contain 1024 samples per frame
   streamDesc.mBytesPerPacket = 0;
   streamDesc.mReserved = 0;

   AudioChannelLayout acl;
   bzero(&acl, sizeof(acl));
   acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
   error = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &streamDesc, sizeof(acl), &acl, 0, NULL, NULL, &mContext->audioFormat);
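As a sanity check on the timing values, here is a minimal C++ sketch of how the millisecond timestamp and the 1024-sample packet duration map onto the value/timescale rationals that `CMTimeMake` builds later in the callback. The `Packet` struct and its field names are assumptions standing in for the real callback payload:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the callback payload described above.
struct Packet {
    uint64_t timestamp;         // presentation time, in milliseconds
    std::vector<uint8_t> data;  // one compressed AAC access unit (1024 PCM frames)
};

// CMTime-style rational: seconds = value / timescale.
struct Rational {
    int64_t value;
    int32_t timescale;
    double seconds() const { return static_cast<double>(value) / timescale; }
};

// Mirrors CMTimeMake(packet.timestamp, 1000) used in the callback.
inline Rational ptsFor(const Packet& p) {
    return Rational{static_cast<int64_t>(p.timestamp), 1000};
}

// Mirrors CMTimeMake(1024, sampleRate): one AAC packet covers 1024 PCM frames.
inline Rational durationFor(int32_t sampleRate) {
    return Rational{1024, sampleRate};
}
```

At 44100 Hz each packet covers 1024/44100 ≈ 23.2 ms, so consecutive packet timestamps should advance by roughly that amount; larger or irregular gaps between appended buffers are one common cause of static-sounding output.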

Finally, I create a CMSampleBufferRef and send it:

   CMSampleBufferRef buffer = NULL;
   CMBlockBufferRef blockBuffer;
   CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, NULL, packet.data.size(), kCFAllocatorDefault, NULL, 0, packet.data.size(), kCMBlockBufferAssureMemoryNowFlag, &blockBuffer);
   CMBlockBufferReplaceDataBytes((void*)packet.data.data(), blockBuffer, 0, packet.data.size());

   CMTime duration = CMTimeMake(1024, mParameters.audioSampleRate);
   CMTime pts = CMTimeMake(packet.timestamp, 1000);
   CMSampleTimingInfo timing = {duration , pts, kCMTimeInvalid };

   size_t sampleSizeArray[1] = {packet.data.size()};

   error = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, nullptr, mContext->audioFormat, 1, 1, &timing, 1, sampleSizeArray, &buffer);       

   // First input buffer must have an appropriate kCMSampleBufferAttachmentKey_TrimDurationAtStart since the codec has encoder delay
   if (mContext->firstAudioFrame)
   {
      CFDictionaryRef dict = NULL;
      dict = CMTimeCopyAsDictionary(CMTimeMake(1024, 44100), kCFAllocatorDefault);
      CMSetAttachment(buffer, kCMSampleBufferAttachmentKey_TrimDurationAtStart, dict, kCMAttachmentMode_ShouldNotPropagate);
      // we must trim the start time on first audio frame...
      mContext->firstAudioFrame = false;
   }

   CMSampleBufferMakeDataReady(buffer);

   BOOL ret = [mContext->aacWriterInput appendSampleBuffer:buffer];
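The TrimDurationAtStart attachment above compensates for AAC encoder priming: the encoder prepends roughly one packet's worth of silence, which the muxer should mark as trimmed. A sketch of the arithmetic, assuming the 1024-frame delay used above (note that the snippet hardcodes 44100 in CMTimeMake while the rest of the code uses mParameters.audioSampleRate; the two should agree):

```cpp
#include <cstdint>

// Seconds of leading audio to trim for AAC encoder priming,
// given the delay in PCM frames and the stream's sample rate.
inline double primingSeconds(int64_t delayFrames, int32_t sampleRate) {
    return static_cast<double>(delayFrames) / sampleRate;
}
```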

I think the part I'm most suspicious of is my call to CMSampleBufferCreate. It seems I have to pass in a sample-size array, or else I immediately get this error message when checking my writer's status:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12735), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x604001e50770 {Error Domain=NSOSStatusErrorDomain Code=-12735 "(null)"}}

The underlying error appears to be kCMSampleBufferError_BufferHasNoSampleSizes.

I did notice that Apple's documentation has an example of creating a buffer with AAC data: https://developer.apple.com/documentation/coremedia/1489723-cmsamplebuffercreate?language=objc

In their example, they specify a long sampleSizeArray with an entry for every single sample. Is that necessary? I don't have that information in this callback. In our Windows implementation we didn't need that data. So I tried sending packet.data.size() as the sample size, but that doesn't seem right, and it certainly doesn't produce pleasant audio.
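On the sample-size question: in CMSampleBuffer terminology, one "sample" of compressed audio is one access unit (one AAC packet), not one PCM sample. So for a buffer wrapping a single packet, a one-entry array containing packet.data.size() lines up with CMSampleBufferCreate's numSamples of 1; the long array in Apple's example corresponds to packing many access units into one buffer. A hedged C++ sketch of that mapping (the helper name is my own):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One size entry per access unit: this is what CMSampleBufferCreate's
// sampleSizeArray describes when numSamples == packets.size().
inline std::vector<size_t> sizeArrayForPackets(
        const std::vector<std::vector<uint8_t>>& packets) {
    std::vector<size_t> sizes;
    sizes.reserve(packets.size());
    for (const auto& p : packets)
        sizes.push_back(p.size());  // byte length of each AAC packet
    return sizes;
}
```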

Any ideas? Either tweaks to the calls I'm making, or a different API I should be using to mux streams of encoded data.

Thanks!

1 Answer:

Answer 0: (score: 0)

If you don't want to transcode, don't pass the outputSettings dictionary. You should pass nil there:

    mContext->aacWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:nil sourceFormatHint:audioFormatDesc];

It's explained somewhere in this article: as per this screenshot