Combining video files with freeze frames between videos using AVAssetExportSession

Posted: 2015-04-24 11:44:40

Tags: ios video avfoundation avassetexportsession avmutablecomposition

I have an app that combines video files together to make one long video. There may be a delay between the videos (e.g., V1 starts at t = 0s and runs for 5 seconds, and V2 starts at t = 10s). In that case, I want the output to freeze on the last frame of V1 until V2 starts.

I am using the code below, but between the videos the whole output goes white.

Any ideas how I can get the effect I am looking for?

Thanks!


@interface VideoJoins : NSObject

-(instancetype)initWithURL:(NSURL*)url
                  andDelay:(NSTimeInterval)delay;

@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;

@end
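
A minimal implementation of this helper class might look like the following (the question only shows the interface; this sketch simply stores the two properties):

@implementation VideoJoins

-(instancetype)initWithURL:(NSURL*)url
                  andDelay:(NSTimeInterval)delay
{
  self = [super init];
  if (self)
  {
    _url = url;
    _delay = delay;
  }
  return self;
}

@end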

EDIT: Got it done by extracting the last frame as an image and generating a still-image video from it (per the accepted answer below). Here is the code that joins the videos together:

+(void)joinVideosSequentially:(NSArray*)videoJoins
                 withFileType:(NSString*)fileType
                     toOutput:(NSURL*)outputVideoURL
                 onCompletion:(dispatch_block_t) onCompletion
                      onError:(ErrorBlock) onError
                     onCancel:(dispatch_block_t) onCancel
{
  //From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
  // Didn't add support for portrait+landscape.
  AVMutableComposition *composition = [AVMutableComposition composition];

  AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

  AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

  CMTime startTime = kCMTimeZero;

  /* videoJoins is an array of VideoJoins objects (url + delay) for the recorded clips */

  //for loop to combine clips into a single video
  for (NSInteger i=0; i < [videoJoins count]; i++)
  {
    VideoJoins* vj = videoJoins[i];
    NSURL *url  = vj.url;
    NSTimeInterval nextDelayTI = 0;
    if(i+1 < [videoJoins count])
    {
      VideoJoins* vjNext = videoJoins[i+1];
      nextDelayTI = vjNext.delay;
    }

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];

    CMTime assetDuration = [asset duration];
    CMTime assetDurationWithNextDelay = assetDuration;
    if(nextDelayTI != 0)
    {
      CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
      assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
    }

    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    //set the orientation
    if(i == 0)
    {
      [compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
    }

    //Note: this requests assetDuration + delay from a track that is only assetDuration
    //long; the trailing span has no source media, which is likely what renders as the
    //blank gap between clips.
    NSError *insertError = nil;
    BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:&insertError];
    ok = ok && [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:&insertError];

    startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
  }

  //Delete the output file if it already exists.
  //Note: NSFileManager needs a file-system path ([outputVideoURL path]),
  //not the "file://..." string that -absoluteString returns.
  NSString* outputVideoPath = [outputVideoURL path];
  if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoPath])
  {
    [[NSFileManager defaultManager] removeItemAtPath:outputVideoPath error:nil];
  }

  //export the combined video
  AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                    presetName:AVAssetExportPresetHighestQuality];

  exporter.outputURL = outputVideoURL;
  exporter.outputFileType = fileType;
  exporter.shouldOptimizeForNetworkUse = YES;

  [exporter exportAsynchronouslyWithCompletionHandler:^(void)
  {
    switch (exporter.status)
    {
      case AVAssetExportSessionStatusCompleted: {
        onCompletion();
        break;
      }
      case AVAssetExportSessionStatusFailed:
      {
        NSLog(@"Export Failed");
        NSError* err = exporter.error;
        NSLog(@"ExportSessionError: %@", [err localizedDescription]);
        onError(err);
        break;
      }
      case AVAssetExportSessionStatusCancelled:
        NSLog(@"Export Cancelled");
        NSLog(@"ExportSessionError: %@", [exporter.error localizedDescription]);
        onCancel();
        break;
    }
  }];
}

2 Answers:

Answer 0 (score: 2):

AVMutableComposition can only stitch videos together. I did two things:

  • Extract the last frame of the first video as an image.
  • Make a video out of that image (the duration depends on your requirement).

Then you can compose those three videos (V1, V2, and your single-image video). Both tasks are very easy to do.

To extract an image from a video, take a look at this link. If you don't want to use MPMoviePlayerController, which the accepted answer there uses, look at Steve's other answer instead.
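
A minimal sketch of that extraction with AVAssetImageGenerator (videoURL is an assumed input; the one-frame back-off and zero tolerances are choices, not requirements):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;        // respect the track's orientation
generator.requestedTimeToleranceBefore = kCMTimeZero;  // ask for the exact frame
generator.requestedTimeToleranceAfter  = kCMTimeZero;

// Aim one frame before the very end so we land on a real frame.
CMTime lastFrameTime = CMTimeSubtract(asset.duration, CMTimeMake(1, 30));
NSError *error = nil;
CMTime actualTime;
CGImageRef cgImage = [generator copyCGImageAtTime:lastFrameTime
                                       actualTime:&actualTime
                                            error:&error];
if (cgImage)
{
  UIImage *lastFrame = [UIImage imageWithCGImage:cgImage];
  CGImageRelease(cgImage);
  // lastFrame is the still that becomes the freeze-frame clip
}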

To make a video out of an image, take a look at this link. That question is about adding audio, but I don't think you need audio here, so look at the approach described in the question itself.
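
Something along these lines works for the image-to-video step (a sketch under assumptions: image, duration, and freezeURL are inputs; error checking and readyForMoreMediaData handling are omitted):

// Hypothetical helper: draws a CGImage into a freshly created CVPixelBuffer.
static CVPixelBufferRef CreatePixelBufferFromImage(CGImageRef image)
{
  size_t width  = CGImageGetWidth(image);
  size_t height = CGImageGetHeight(image);
  NSDictionary *attrs = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                          (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
  CVPixelBufferRef buffer = NULL;
  CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                      (__bridge CFDictionaryRef)attrs, &buffer);
  CVPixelBufferLockBaseAddress(buffer, 0);
  CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
  CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                           width, height, 8,
                                           CVPixelBufferGetBytesPerRow(buffer),
                                           rgb, kCGImageAlphaNoneSkipFirst);
  CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
  CGContextRelease(ctx);
  CGColorSpaceRelease(rgb);
  CVPixelBufferUnlockBaseAddress(buffer, 0);
  return buffer;
}

// Write a clip of `duration` seconds that shows only `image`.
NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:freezeURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
NSDictionary *settings = @{AVVideoCodecKey:  AVVideoCodecH264,
                           AVVideoWidthKey:  @(CGImageGetWidth(image)),
                           AVVideoHeightKey: @(CGImageGetHeight(image))};
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
  [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                           sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

CVPixelBufferRef buffer = CreatePixelBufferFromImage(image);
// Two appends of the same frame: one at t = 0 and one at the end of the clip.
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMakeWithSeconds(duration, 600)];
CVPixelBufferRelease(buffer);

[writer endSessionAtSourceTime:CMTimeMakeWithSeconds(duration, 600)];
[input markAsFinished];
[writer finishWritingWithCompletionHandler:^{ /* freeze-frame clip ready at freezeURL */ }];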

Update: There is an easier way, but it comes with a drawback. You can use two AVPlayers. The first plays your combined video, white frames and all. The other sits behind it, paused on the last frame of video 1. When the blank middle section comes up, the second AVPlayer's last frame shows through, so overall it looks as if video 1 is paused. Trust me, the naked eye can't tell when the players swap. The obvious drawback is that the exported file still contains the blank frames, so this approach only works if you just need to play the video inside your app.
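
A rough sketch of that two-player setup (containerView, combinedVideoURL, v1URL, gapStart, and gapEnd are all assumed; a real implementation should wait until the back player's item is ready before seeking):

// Back player: parked on V1's last frame, never plays.
AVPlayer *backPlayer = [AVPlayer playerWithURL:v1URL];
AVPlayerLayer *backLayer = [AVPlayerLayer playerLayerWithPlayer:backPlayer];
backLayer.frame = containerView.bounds;
[containerView.layer addSublayer:backLayer];
[backPlayer seekToTime:backPlayer.currentItem.asset.duration
       toleranceBefore:kCMTimeZero
        toleranceAfter:kCMTimeZero];

// Front player: plays the combined video (the one with the blank gap).
AVPlayer *frontPlayer = [AVPlayer playerWithURL:combinedVideoURL];
AVPlayerLayer *frontLayer = [AVPlayerLayer playerLayerWithPlayer:frontPlayer];
frontLayer.frame = containerView.bounds;
[containerView.layer addSublayer:frontLayer]; // added last, so it sits in front
[frontPlayer play];

// Hide the front layer for the duration of the gap so the paused back player
// shows through; a boundary time observer at the gap's edges is one way:
NSArray *times = @[[NSValue valueWithCMTime:gapStart], [NSValue valueWithCMTime:gapEnd]];
[frontPlayer addBoundaryTimeObserverForTimes:times
                                       queue:dispatch_get_main_queue()
                                  usingBlock:^{ frontLayer.hidden = !frontLayer.hidden; }];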

Answer 1 (score: 0):

The first frame of a video asset always comes out black or white, so skip it:

CMTime delta = CMTimeMake(1, 25); // 1 frame (if fps = 25)
// Start each clip's time range one frame in, so its (black/white) first frame is skipped.
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta, clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
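
In context, that offset range is what gets inserted into the composition track; a hedged sketch of the call (compositionVideoTrack, clipVideoTrack, and nextVideoClipStartTime as above):

NSError *error = nil;
BOOL ok = [compositionVideoTrack insertTimeRange:timeRangeInVideoAsset
                                         ofTrack:clipVideoTrack
                                          atTime:nextVideoClipStartTime
                                           error:&error];
if (!ok)
{
  NSLog(@"insertTimeRange failed: %@", error);
}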

I merged more than 400 short videos into one this way.