Repeated audio frames when creating a *.mov file via AVAsset in AVFoundation

Date: 2016-12-13 05:57:51

Tags: audio avfoundation avassetwriter .mov avassetwriterinput

I am having some trouble creating ProRes-encoded .mov files using the AVFoundation framework and AVAsset.

This is on OSX 10.10.5, using Xcode 7, linking against the 10.9 libraries. So far I have managed to create valid ProRes files that contain video plus multiple audio channels.

(I am creating multiple tracks of uncompressed 48 kHz, 16-bit PCM audio)

Adding the video frames works fine, and adding the audio frames works fine too, or at least the calls succeed in the code.

However, when I play the file back, it looks as though the audio frames repeat in runs of 12, 13, 14 or 15 frames.

Looking at the waveform, the repeated audio is easy to see in the *.mov...

That is, the first 13 (or X) video frames all contain exactly the same audio, which then repeats for the next X frames, then again and again, and so on...

The video is fine; it is only the audio that seems to loop/repeat.

The problem occurs regardless of how many audio channels/tracks I use as the source; I have tested with just 1 track, and also with 4 and 8 tracks.

It is also independent of the format and the number of samples I supply to the system, i.e. 720p60, 1080p23 and 1080i59 all exhibit the same incorrect behaviour.

  • In fact, 720p captures seem to repeat the audio frames 30 or 31 times, whereas the 1080 formats only repeat them 12 or 13 times.

But I am definitely submitting different audio data to the audio encode / sample buffer creation process, as I have logged this in great detail (although that logging is not shown in the code below).

I have tried many different modifications to the code to expose the problem, but without success, so I am asking here in the hope that someone can spot the issue in my code or give me some insight into it.

The code I am using is as follows:

int main(int argc, const char * argv[])
{
    @autoreleasepool
    {
        NSLog(@"Hello, World!  - Welcome to the ProResCapture With Audio sample app. ");
        OSStatus status;
        AudioStreamBasicDescription audioFormat;
        CMAudioFormatDescriptionRef audioFormatDesc;

        // OK so let's include the hardware stuff first and then we can see about doing some actual capture and compress stuff
        HARDWARE_HANDLE pHardware = sdiFactory();
        if (pHardware)
        {
            unsigned long ulUpdateType = UPD_FMT_FRAME;
            unsigned long ulFieldCount = 0;
            unsigned int numAudioChannels = 4; //8; //4;
            int numFramesToCapture = 300;

            gBFHancBuffer = (unsigned int*)myAlloc(gHANC_SIZE);

            int audioSize = 2002 * 4 * 16;
            short* pAudioSamples = (short*)new char[audioSize];
            std::vector<short*> vecOfNonInterleavedAudioSamplesPtrs;
            for (int i = 0; i < 16; i++)
            {
                vecOfNonInterleavedAudioSamplesPtrs.push_back((short*)myAlloc(2002 * sizeof(short)));
            }

            bool bVideoModeIsValid = SetupAndConfigureHardwareToCaptureIncomingVideo();

            if (bVideoModeIsValid)
            {

                gBFBytes = (BLUE_UINT32*)myAlloc(gGoldenSize);

                bool canAddVideoWriter = false;
                bool canAddAudioWriter = false;
                int nAudioSamplesWritten = 0;

                // declare the vars for our various AVAsset elements
                AVAssetWriter* assetWriter = nil;
                AVAssetWriterInput* assetWriterInputVideo = nil;
                AVAssetWriterInput* assetWriterAudioInput[16];


                AVAssetWriterInputPixelBufferAdaptor* adaptor = nil;
                NSURL* localOutputURL = nil;
                NSError* localError = nil;

                // create the file we are going to be writing to
                localOutputURL = [NSURL URLWithString:@"file:///Volumes/Media/ProResAVCaptureAnyFormat.mov"];

                assetWriter = [[AVAssetWriter alloc] initWithURL: localOutputURL fileType:AVFileTypeQuickTimeMovie error:&localError];
                if (assetWriter)
                {
                    assetWriter.shouldOptimizeForNetworkUse = NO;

                    // Let's configure the Audio and Video settings for this writer...
                    {
                          // Video First.

                          // Add a video input
                          // create a dictionary with the settings we want ie. Prores capture and width and height.
                          NSMutableDictionary* videoSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                                                AVVideoCodecAppleProRes422, AVVideoCodecKey,
                                                                [NSNumber numberWithInt:width], AVVideoWidthKey,
                                                                [NSNumber numberWithInt:height], AVVideoHeightKey,
                                                                nil];

                          assetWriterInputVideo = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo outputSettings:videoSettings];
                          adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterInputVideo
                                                                                                     sourcePixelBufferAttributes:nil];

                          canAddVideoWriter = [assetWriter canAddInput:assetWriterInputVideo];
                    }

                    { // Add an Audio AssetWriterInput

                          // Create a dictionary with the settings we want ie. Uncompressed PCM audio 16 bit little endian.
                          NSMutableDictionary* audioSettings = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                                                [NSNumber numberWithFloat:48000.0], AVSampleRateKey,
                                                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                                                [NSNumber numberWithUnsignedInteger:1], AVNumberOfChannelsKey,
                                                                nil];

                          // OR use... FillOutASBDForLPCM(AudioStreamBasicDescription& outASBD, Float64 inSampleRate, UInt32 inChannelsPerFrame, UInt32 inValidBitsPerChannel, UInt32 inTotalBitsPerChannel, bool inIsFloat, bool inIsBigEndian, bool inIsNonInterleaved = false)
                          UInt32 inValidBitsPerChannel = 16;
                          UInt32 inTotalBitsPerChannel = 16;
                          bool inIsFloat = false;
                          bool inIsBigEndian = false;
                          UInt32 inChannelsPerTrack = 1;
                          FillOutASBDForLPCM(audioFormat, 48000.00, inChannelsPerTrack, inValidBitsPerChannel, inTotalBitsPerChannel, inIsFloat, inIsBigEndian);

                          status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
                                                                  &audioFormat,
                                                                  0,
                                                                  NULL,
                                                                  0,
                                                                  NULL,
                                                                  NULL,
                                                                  &audioFormatDesc
                                                                  );

                          for (int t = 0; t < numAudioChannels; t++)
                          {
                              assetWriterAudioInput[t] = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioSettings];
                              canAddAudioWriter = [assetWriter canAddInput:assetWriterAudioInput[t] ];

                              if (canAddAudioWriter)
                              {
                                  assetWriterAudioInput[t].expectsMediaDataInRealTime = YES; //true;
                                  [assetWriter addInput:assetWriterAudioInput[t] ];
                              }
                          }


                          CMFormatDescriptionRef myFormatDesc = assetWriterAudioInput[0].sourceFormatHint;
                          NSString* medType = [assetWriterAudioInput[0] mediaType];
                    }

                    if(canAddVideoWriter)
                    {
                          // tell the asset writer to expect media in real time.
                          assetWriterInputVideo.expectsMediaDataInRealTime = YES; //true;

                          // add the Input(s)
                          [assetWriter addInput:assetWriterInputVideo];

                          // Start writing the frames..
                          BOOL success = true;
                          success = [assetWriter startWriting];
                          CMTime startTime = CMTimeMake(0, fpsRate);
                          [assetWriter startSessionAtSourceTime:kCMTimeZero];
                          // [assetWriter startSessionAtSourceTime:startTime];

                      if (success)
                      {
                          startOurVideoCaptureProcess();

                          // **** possible enhancement is to use a pixelBufferPool to manage multiple buffers at once...
                          CVPixelBufferRef buffer = NULL;
                          int kRecordingFPS = fpsRate;
                          bool frameAdded = false;
                          unsigned int bufferID;


                          for( int i = 0; i < numFramesToCapture; i++)
                          {
                              printf("\n");

                              buffer = pixelBufferFromCard(bufferID, width, height, memFmt); // This function gets a CVPixelBufferRef from our device, as well as the Audio data
                              while(!adaptor.assetWriterInput.readyForMoreMediaData)
                              {
                                    printf(" readyForMoreMediaData FAILED \n");
                              }

                              if (buffer)
                              {
                                  // Add video
                                  printf("appending Frame %d ", i);
                                  CMTime frameTime = CMTimeMake(i, kRecordingFPS);
                                  frameAdded = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                                  if (frameAdded)
                                      printf("VideoAdded.....\n ");

                                  // Add Audio
                                  {
                                      // Do some Processing on the captured data to extract the interleaved Audio Samples for each channel
                                      struct hanc_decode_struct decode;
                                      DecodeHancFrameEx(gBFHancBuffer, decode);
                                      int nAudioSamplesCaptured = 0;
                                      if(decode.no_audio_samples > 0)
                                      {
                                          printf("completed deCodeHancEX, found %d samples \n", ( decode.no_audio_samples  / numAudioChannels) );
                                          nAudioSamplesCaptured = decode.no_audio_samples  / numAudioChannels;
                                      }

                                      CMTime audioTimeStamp = CMTimeMake(nAudioSamplesWritten, 480000); // (Samples Written) / sampleRate for audio


                                      // This function repacks the Audio from interleaved PCM data into a vector of individual arrays of Audio data, one per channel
                                      RepackDecodedHancAudio((void*)pAudioSamples, numAudioChannels, nAudioSamplesCaptured, vecOfNonInterleavedAudioSamplesPtrs);

                                      for (int t = 0; t < numAudioChannels; t++)
                                      {
                                          CMBlockBufferRef blockBuf = NULL; // ***********  MUST release these AFTER adding the samples to the assetWriter...
                                          CMSampleBufferRef cmBuf = NULL;

                                          int sizeOfSamplesInBytes = nAudioSamplesCaptured * 2;  // always 16bit memory samples...

                                          // Create sample Block buffer for adding to the audio input.
                                          status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
                                                                                      (void*)vecOfNonInterleavedAudioSamplesPtrs[t],
                                                                                      sizeOfSamplesInBytes,
                                                                                      kCFAllocatorNull,
                                                                                      NULL,
                                                                                      0,
                                                                                      sizeOfSamplesInBytes,
                                                                                      0,
                                                                                      &blockBuf);

                                          if (status != noErr)
                                                NSLog(@"CMBlockBufferCreateWithMemoryBlock error");

                                          status = CMAudioSampleBufferCreateWithPacketDescriptions(kCFAllocatorDefault,
                                                                                                   blockBuf,
                                                                                                   TRUE,
                                                                                                   0,
                                                                                                   NULL,
                                                                                                   audioFormatDesc,
                                                                                                   nAudioSamplesCaptured,
                                                                                                   audioTimeStamp,
                                                                                                   NULL,
                                                                                                   &cmBuf);
                                          if (status != noErr)
                                                NSLog(@"CMSampleBufferCreate error");

                                          // let's check if the CMSampleBuffer is valid
                                          bool bValid = CMSampleBufferIsValid(cmBuf);

                                          // examine these values for debugging info....
                                          CMTime cmTimeSampleDuration = CMSampleBufferGetDuration(cmBuf);
                                          CMTime cmTimePresentationTime = CMSampleBufferGetPresentationTimeStamp(cmBuf);

                                          if (status != noErr)
                                              NSLog(@"Invalid Buffer found!!! possible CMSampleBufferCreate error?");


                                          if(!assetWriterAudioInput[t].readyForMoreMediaData)
                                              printf(" readyForMoreMediaData FAILED  - Had to Drop a frame\n");
                                          else
                                          {
                                              if(assetWriter.status == AVAssetWriterStatusWriting)
                                              {
                                                  BOOL r = YES;
                                                  r = [assetWriterAudioInput[t] appendSampleBuffer:cmBuf];
                                                  if (!r)
                                                  {
                                                      NSLog(@"appendSampleBuffer error");
                                                  }
                                                  else
                                                      success = true;

                                              }
                                              else
                                                  printf("AssetWriter Not ready???!? \n");
                                        }

                                          if (cmBuf)
                                          {
                                              CFRelease(cmBuf);
                                              cmBuf = 0;
                                          }
                                          if(blockBuf)
                                          {
                                              CFRelease(blockBuf);
                                              blockBuf = 0;
                                          }
                                      }
                                      nAudioSamplesWritten = nAudioSamplesWritten + nAudioSamplesCaptured;
                                  }

                                  if(success)
                                  {
                                      printf("Audio tracks Added..");
                                  }
                                  else
                                  {
                                      NSError* nsERR = [assetWriter error];
                                      printf("Problem Adding Audio tracks / samples");
                                  }
                                  printf("Success \n");
                              }


                              if (buffer)
                              {
                                  CVBufferRelease(buffer);
                              }
                          }
                      }
                          AVAssetWriterStatus sta = [assetWriter status];
                          CMTime endTime = CMTimeMake((numFramesToCapture-1), fpsRate);

                          if (audioFormatDesc)
                          {
                              CFRelease(audioFormatDesc);
                              audioFormatDesc = 0;
                          }

                          // Finish the session
                          StopVideoCaptureProcess();
                          [assetWriterInputVideo markAsFinished];
                          for (int t = 0; t < numAudioChannels; t++)
                          {
                              [assetWriterAudioInput[t] markAsFinished];
                          }

                          [assetWriter endSessionAtSourceTime:endTime];


                          bool finishedSuccessfully = [assetWriter finishWriting];
                          if (finishedSuccessfully)
                              NSLog(@"Writing file ended successfully \n");
                          else
                          {
                              NSLog(@"Writing file ended WITH ERRORS...");
                              sta = [assetWriter status];
                              if (sta != AVAssetWriterStatusCompleted)
                              {
                                  NSError* nsERR = [assetWriter error];
                                  printf("investigating the error \n");
                              }
                          }
                    }
                    else
                    {
      NSLog(@"Unable to Add the InputVideo Asset Writer to the AssetWriter, file will not be written - Exiting");
                    }

                    if (audioFormatDesc)
                        CFRelease(audioFormatDesc);
                }


                for (int i = 0; i < 16; i++)
                {
                    if (vecOfNonInterleavedAudioSamplesPtrs[i])
                    {
                        bfFree(2002 * sizeof(unsigned short), vecOfNonInterleavedAudioSamplesPtrs[i]);
                        vecOfNonInterleavedAudioSamplesPtrs[i] = nullptr;
                    }
                }

            }
            else
            {
                NSLog(@"Unable to find a valid input signal - Exiting");
            }


            if (pAudioSamples)
                delete[] pAudioSamples;
        }
    }
    return 0;
}

This is a very basic example that connects to some specialised hardware (that code is omitted).

It grabs the video and audio frames, and then there is some audio processing that turns the interleaved PCM into an individual array of PCM data for each track.
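
(To illustrate what that repacking step does, here is a simplified sketch of the idea; the actual RepackDecodedHancAudio() call in the code above does the equivalent, and the names below are only illustrative.)

#include <vector>

// Split interleaved 16-bit PCM (ch0, ch1, ... chN-1, ch0, ch1, ...) into one
// contiguous buffer per channel, which is what the per-track writer inputs expect.
static void DeinterleavePCM(const short* pInterleaved,
                            int numChannels,
                            int numSamplesPerChannel,
                            std::vector<short*>& perChannelBuffers)
{
    for (int s = 0; s < numSamplesPerChannel; s++)
    {
        for (int ch = 0; ch < numChannels; ch++)
        {
            perChannelBuffers[ch][s] = pInterleaved[s * numChannels + ch];
        }
    }
}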

Each buffer is then appended to the appropriate track, whether video or audio...

Finally the AVAsset content is finished off and closed, and I exit and clean up.

Any help is much appreciated,

Cheers,

James

1 Answer:

Answer (score: 0):

I finally found a working solution to this problem.

The solution came in two parts:

  1. I moved from using CMAudioSampleBufferCreateWithPacketDescriptions to using CMSampleBufferCreate(..), with the appropriate arguments for that function call.

  2. Initially, while experimenting with CMSampleBufferCreate, I got some of the arguments wrong, and it gave me exactly the same results I originally described; but on closer inspection of the values I was passing in the CMSampleTimingInfo struct, specifically the duration field, I finally got everything working (a minimal sketch of the corrected call is shown after this list).

  3. So it seems I was creating the CMBlockBufferRef correctly, but I needed to take much more care when using it to create the CMSampleBufferRef that I pass to the AVAssetWriterInput!
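
For anyone hitting the same problem, this is roughly what the corrected per-channel sample buffer creation looks like. It is a minimal sketch rather than my exact code: it assumes the same blockBuf, audioFormatDesc, nAudioSamplesCaptured and nAudioSamplesWritten variables that appear in the question's code, and 48 kHz audio.

CMSampleTimingInfo timingInfo;
timingInfo.duration = CMTimeMake(1, 48000);                                  // duration of ONE sample at 48 kHz - this is the value I originally got wrong
timingInfo.presentationTimeStamp = CMTimeMake(nAudioSamplesWritten, 48000); // running sample count over the sample rate
timingInfo.decodeTimeStamp = kCMTimeInvalid;

CMSampleBufferRef cmBuf = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault,
                              blockBuf,              // the CMBlockBufferRef wrapping this channel's PCM data
                              TRUE,                  // dataReady
                              NULL,                  // makeDataReadyCallback
                              NULL,                  // makeDataReadyRefcon
                              audioFormatDesc,       // the LPCM CMAudioFormatDescriptionRef
                              nAudioSamplesCaptured, // numSamples
                              1,                     // numSampleTimingEntries
                              &timingInfo,           // a single timing entry covering all the samples
                              0,                     // numSampleSizeEntries (0 is fine for constant-size LPCM)
                              NULL,
                              &cmBuf);
if (status != noErr)
    NSLog(@"CMSampleBufferCreate error %d", (int)status);

The appendSampleBuffer: call and the CFRelease handling stay exactly as in the question's code.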

Hopefully this helps someone else, as this was a nasty problem for me!

    • James