iOS Swift: trying to build a video by writing image buffers; finishWritingWithCompletionHandler is never called and the output video contains zero bytes

Date: 2014-10-16 17:51:37

Tags: ios video swift avfoundation

I'm trying to build a video out of just two frames rendered from a still image. I've fiddled with the timing parameters, but the last step, finishWritingWithCompletionHandler, never seems to be called ("finished writing..." is never printed). Only a zero-byte .mp4 video is created, and no error occurs. I can't figure out why. Here is the code I'm using:

func createBackgroundVideo(CompletionHandler: (path: String)->Void) {

    var maybeError: NSError?
    let fileMgr = NSFileManager.defaultManager()
    let docDirectory = NSHomeDirectory().stringByAppendingPathComponent("Documents")
    let videoOutputPath = docDirectory.stringByAppendingPathComponent(BgVideoName)

    if (!fileMgr.removeItemAtPath(videoOutputPath, error: &maybeError)) {
        NSLog("Umable to delete file: %@", maybeError!.localizedDescription)
    }

    println(videoOutputPath)

    let videoWriter = AVAssetWriter(
        URL: NSURL(fileURLWithPath: videoOutputPath),
        fileType: AVFileTypeQuickTimeMovie,
        error: &maybeError
    )

    var videoSettings = [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: NSNumber(float: Float(videoWidth)),
        AVVideoHeightKey: NSNumber(float: Float(videoHeight))
    ]

    var avAssetInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    avAssetInput.expectsMediaDataInRealTime = true

    var adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: avAssetInput, sourcePixelBufferAttributes: nil)

    videoWriter.addInput(avAssetInput)
    videoWriter.startWriting()
    videoWriter.startSessionAtSourceTime(kCMTimeZero)

    var frameCount: Int64 = 0;
    var buffer: CVPixelBufferRef

    for i in 1...2 {
        buffer = PixelBuffer.pixelBufferFromCGImage2(self.bgImage.CGImage, andSize: CGSizeMake(videoWidth, videoHeight)).takeUnretainedValue()
        var appendOk = false
        var retries: Int = 0

        while (!appendOk && retries < 30) {
            if (adaptor.assetWriterInput.readyForMoreMediaData) {
                let frameTime = CMTimeMake(frameCount, 1);
                appendOk = adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                if (!appendOk) {
                    println("some erorr occurred", videoWriter.error)
                } else {
                    println("pixel written")
                }
            } else {
                println("adaptor is not ready....")
                NSThread.sleepForTimeInterval(0.1)
            }
            retries++
        }

        if (!appendOk) {
            println("Error appending image....")
        }

        frameCount++
    }

    avAssetInput.markAsFinished()
    videoWriter.finishWritingWithCompletionHandler({() -> Void in
        println("finished writing...")
        CompletionHandler(path: videoOutputPath)
    })
}

The method that creates a pixel buffer from a CGImage is written in Obj-C (I've added the header and the bridging header, and that part seems to work fine):

+ (CVPixelBufferRef) pixelBufferFromCGImage2: (CGImageRef) image andSize:(CGSize) size {

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess){
        NSLog(@"Failed to create pixel buffer");
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 kCGImageAlphaPremultipliedFirst);

    float offsetY = size.height / 2 - CGImageGetHeight(image) / 2;
    float offsetX = size.width / 2 - CGImageGetWidth(image) / 2;

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(offsetX, offsetY, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

Thanks for reading.

1 Answer:

Answer 0 (score: 1):

Maybe your videoSettings dictionary is incomplete. Try setting more of the atom information, like this:

var videoCleanApertureSettings = [
    AVVideoCleanApertureWidthKey: Int(self.width),
    AVVideoCleanApertureHeightKey: Int(self.height),
    AVVideoCleanApertureHorizontalOffsetKey: 0,
    AVVideoCleanApertureVerticalOffsetKey: 0
]

var videoAspectRatioSettings = [
    AVVideoPixelAspectRatioHorizontalSpacingKey: 1,
    AVVideoPixelAspectRatioVerticalSpacingKey: 1
]

var codecSettings = [
    AVVideoCleanApertureKey: videoCleanApertureSettings,
    AVVideoPixelAspectRatioKey: videoAspectRatioSettings
]

var videoSettings = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoCompressionPropertiesKey: codecSettings,
    AVVideoWidthKey: Int(self.width),
    AVVideoHeightKey: Int(self.height)
]

You start your video at timestamp zero. That's fine:

[self.videoWriter startSessionAtSourceTime:kCMTimeZero];

Maybe the timestamps of your video frames are not far enough apart for anything to be visible. If you want each image to be displayed for a few seconds, you could do something like this:

int64_t newFrameNumber = (uint64_t)(presentationTimeInSeconds * 60.);
CMTime frameTime = CMTimeMake(newFrameNumber, 60);

With 60 as the timescale you can work in seconds as the unit and still get good resolution.
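For instance, here is a minimal Swift sketch in the style of the question's code (the two-second spacing per image is an assumption for illustration, not from the original):

import AVFoundation

// Space two frames two seconds apart at a timescale of 60.
let secondsPerFrame = 2.0
for i in 0..<2 {
    let frameNumber = Int64(Double(i) * secondsPerFrame * 60.0)
    let frameTime = CMTimeMake(frameNumber, 60) // value / timescale = seconds
    // adaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
}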

To produce a slideshow in "real time", you can use NSDate to encode the timestamps:

int64_t newFrameNumber = (uint64_t)(fabs([self.videoStartDate timeIntervalSinceNow]) * 60.);

where self.videoStartDate is an [NSDate date] value you set right after starting the video.
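A Swift sketch of the same idea (videoStartDate here stands in for the property described above; setting it right after startWriting() is an assumption from the answer's description):

import Foundation
import CoreMedia

// Assumed: videoStartDate was set to NSDate() right after startWriting().
let videoStartDate = NSDate()
let elapsedSeconds = abs(videoStartDate.timeIntervalSinceNow)
let frameNumber = Int64(elapsedSeconds * 60.0)
let frameTime = CMTimeMake(frameNumber, 60)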

CMTime tells the decoder when to display an image, not for how long to display it. You start at frameCount 0, which tells the decoder to display the first image immediately. Maybe try starting from 1 and see whether the video then shows the first image a bit later.

If you use startSessionAtSourceTime, you have to end the session with endSessionAtSourceTime before calling finishWritingWithCompletionHandler, otherwise the completion handler may never be called. Pass the last timestamp to endSessionAtSourceTime.
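Sketched against the question's code (where frameCount counts frames at a timescale of 1), the end of createBackgroundVideo would then look roughly like this:

avAssetInput.markAsFinished()
// Close the session at the last written timestamp, then finish the file.
videoWriter.endSessionAtSourceTime(CMTimeMake(frameCount, 1))
videoWriter.finishWritingWithCompletionHandler {
    println("finished writing...")
}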

To rule out a bug in finishWritingWithCompletionHandler, you could also try Apple's deprecated method: after the markAsFinished call, use

videoWriter.finishWriting()

instead of finishWritingWithCompletionHandler, and wait for the file to be closed by the disk writer (i.e. by using a dispatch queue):

int64_t delayInSeconds = 1;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, delayInSeconds * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){

     // call your completion handler after the file has been written
});

Here is the Swift version:

let delayInSeconds:Double = 0.5
let popTime = dispatch_time(DISPATCH_TIME_NOW, Int64(delayInSeconds * Double(NSEC_PER_SEC)))
dispatch_after(popTime, dispatch_get_main_queue(), {

   println("finished writing...")
   CompletionHandler(path: videoOutputPath)
})

Maybe your videoWriter instance no longer exists once execution leaves the function. (The block is called asynchronously, but you declared the videoWriter locally inside the function, so ARC may release the object before the completion handler is called.) Declare the writer at a broader scope to work around this, as in the sketch below.
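A sketch of that workaround, with hypothetical class and property names (not from the original code):

import AVFoundation

class BackgroundVideoBuilder {
    // Strong reference at class scope: ARC keeps the writer alive until the
    // asynchronous completion handler has fired.
    var videoWriter: AVAssetWriter?

    func createBackgroundVideo(completionHandler: (path: String) -> Void) {
        // Assign the writer to the property instead of a local variable:
        // self.videoWriter = AVAssetWriter(URL: url, fileType: AVFileTypeQuickTimeMovie, error: &maybeError)
        // ... configure inputs, append frames, then finish as above ...
    }
}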

Tip:

Keep the CGColorSpace in memory (i.e. make it a class or static variable), because CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB(); takes a long time to initialize. Doing this only once, before encoding the video, can speed up your app significantly!
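A minimal sketch of that caching in Swift (a file-scope constant here, since Swift 1.x did not yet support stored class variables; the name is illustrative):

import CoreGraphics

// Created once, then reused for every pixel buffer.
let cachedRGBColorSpace = CGColorSpaceCreateDeviceRGB()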