AVFoundation creating video from images with the wrong frame rate

Date: 2014-05-18 10:37:44

Tags: avfoundation video-processing frame-rate avassetwriter

I'm trying to create a video from images using AVFoundation. There are already several threads about this approach, but I believe many of them run into the same problem I'm describing here.

The video plays fine on the iPhone, but it won't play in VLC, and it doesn't play correctly on Facebook or Vimeo either (sometimes a few frames are out of sync). VLC says the video's frame rate is 0.58 fps, but it should be above 24, right?

Does anyone know what is causing this behavior?

Here is the code used to create the video:

    NSError *error = nil;
    self.videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeMPEG4 error:&error];
    // Codec compression settings
    NSDictionary *videoSettings = @{
                                    AVVideoCodecKey : AVVideoCodecH264,
                                    AVVideoWidthKey : @(self.videoSize.width),
                                    AVVideoHeightKey : @(self.videoSize.height),
                                    AVVideoCompressionPropertiesKey : @{
                                            AVVideoAverageBitRateKey : @(20000*1000), // 20 000 kbits/s
                                            AVVideoProfileLevelKey : AVVideoProfileLevelH264High40,
                                            AVVideoMaxKeyFrameIntervalKey : @(1)
                                            }
                                    };

    AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                     sourcePixelBufferAttributes:nil];

    videoWriterInput.expectsMediaDataInRealTime = NO;
    [self.videoWriter addInput:videoWriterInput];
    [self.videoWriter startWriting];
    [self.videoWriter startSessionAtSourceTime:kCMTimeZero];

    [adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
        CMTime time = CMTimeMakeWithSeconds(0, 1000);

        for (Segment* segment in segments) {
            @autoreleasepool {
                UIImage* image = segment.segmentImage;
                CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
                [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
                CVPixelBufferRelease(buffer);

                CMTime millisecondsDuration = CMTimeMake(segment.durationMS.integerValue, 1000);
                time = CMTimeAdd(time, millisecondsDuration);
            }
        }
        [videoWriterInput markAsFinished];
        [self.videoWriter endSessionAtSourceTime:time];
        [self.videoWriter finishWritingWithCompletionHandler:^{
            NSLog(@"Video writer has finished creating video");
        }];
    }];

- (CVPixelBufferRef)pixelBufferFromImage:(UIImage*)image withImageSize:(CGSize)size{
    CGImageRef cgImage = image.CGImage;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess){
        DebugLog(@"Failed to create pixel buffer");
        return NULL; // bail out instead of writing into a NULL buffer below
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(cgImage), CGImageGetHeight(cgImage)), cgImage);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

+ (BOOL)appendToAdapter:(AVAssetWriterInputPixelBufferAdaptor*)adaptor
            pixelBuffer:(CVPixelBufferRef)buffer
                 atTime:(CMTime)time{
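    // Spin the run loop until the writer input can accept more media data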
    while (!adaptor.assetWriterInput.readyForMoreMediaData) {
        [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:0.1]];
    }
    return [adaptor appendPixelBuffer:buffer withPresentationTime:time];
}

1 Answer:

Answer 0 (score: 4)

Looking at the code, I think the problem is the way you're using timestamps...

A CMTime is made up of a value and a timescale. The way I think about it is to treat the timescale part as the frame rate (that's not strictly accurate, but it's a very useful mental model for what you're trying to do).

The first frame of a 30 FPS video would be:

CMTimeMake(1, 30);

And the 60th frame at 30 frames per second, which, not coincidentally, is also (60 divided by 30) the two-second mark of your video:

CMTimeMake(60, 30); 
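
You can sanity-check this mental model with CMTimeGetSeconds (a tiny illustrative snippet, not part of the original code):

CMTime t = CMTimeMake(60, 30);     // value 60 at timescale 30
NSLog(@"%f", CMTimeGetSeconds(t)); // prints 2.000000, i.e. the 2-second mark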

You specified 1000 as the timescale, which is far higher than you need. In the loop, you appear to be placing a frame, then adding roughly a second (the segment's duration), then placing another frame. That's what gets you 0.58 FPS... (I would have expected closer to 1 FPS, but who knows the intricacies of the codec). As a rough sanity check: if your segments average about 1.7 seconds each, one frame per segment works out to 1/1.7 ≈ 0.58 frames per second, which matches what VLC reports.

Instead, what you want to do is loop 30 times (if you want each image to be shown for 1 second at 30 frames per second) and place the SAME image on every one of those frames. That should get you to 30 FPS. And of course, if you want 24 FPS you can use a timescale of 24, whatever your requirement is.

Try rewriting this part of the code:

[adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
    CMTime time = CMTimeMakeWithSeconds(0, 1000);

    for (Segment* segment in segments) {
        @autoreleasepool {
            UIImage* image = segment.segmentImage;
            CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
            [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
            CVPixelBufferRelease(buffer);

            CMTime millisecondsDuration = CMTimeMake(segment.durationMS.integerValue, 1000);
            time = CMTimeAdd(time, millisecondsDuration);
        }
    }
    [videoWriterInput markAsFinished];
    [self.videoWriter endSessionAtSourceTime:time];
    [self.videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Video writer has finished creating video");
    }];
}];

To something more like this:

[adaptor.assetWriterInput requestMediaDataWhenReadyOnQueue:self.photoToVideoQueue usingBlock:^{
    // Let's start at the first frame with a timescale of 30 FPS
    CMTime time = CMTimeMake(1, 30);

    for (Segment* segment in segments) {
        @autoreleasepool {
            UIImage* image = segment.segmentImage;
            CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
            for (int i = 1; i <= 30; i++) {
                [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
                time = CMTimeAdd(time, CMTimeMake(1, 30)); // Add another "frame"
            }
            CVPixelBufferRelease(buffer);
        }
    }
    [videoWriterInput markAsFinished];
    [self.videoWriter endSessionAtSourceTime:time];
    [self.videoWriter finishWritingWithCompletionHandler:^{
        NSLog(@"Video writer has finished creating video");
    }];
}];
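
If you need each segment to keep its own duration instead of a flat one second per image, the same idea still works: convert durationMS into a whole number of 1/30 s frames and append the same buffer that many times. A sketch under the same assumptions as above (your Segment class and helper methods):

CMTime time = kCMTimeZero;

for (Segment* segment in segments) {
    @autoreleasepool {
        UIImage* image = segment.segmentImage;
        CVPixelBufferRef buffer = [self pixelBufferFromImage:image withImageSize:self.videoSize];
        // Round the segment duration (in ms) to a whole number of 1/30 s frames
        int frameCount = MAX(1, (int)lround(segment.durationMS.doubleValue * 30.0 / 1000.0));
        for (int i = 0; i < frameCount; i++) {
            [ImageToVideoManager appendToAdapter:adaptor pixelBuffer:buffer atTime:time];
            time = CMTimeAdd(time, CMTimeMake(1, 30)); // advance exactly one frame
        }
        CVPixelBufferRelease(buffer);
    }
}

Because every timestamp now lands on the 1/30 s grid, the container's nominal frame rate stays at 30 FPS while each image is still held on screen for approximately its original duration.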