In the Cocoa application I'm currently writing, I get snapshot images from a Quartz Composer renderer (NSImage objects) and I would like to encode them into a QTMovie at 720x480, 25 fps, with the H264 codec, using the addImage: method. Here is the corresponding piece of code:
qRenderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(720, 480)
                                           colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
                                          composition:[QCComposition compositionWithFile:qcPatchPath]]; // define an "offscreen" Quartz composition renderer with the right image size

imageAttrs = [NSDictionary dictionaryWithObjectsAndKeys:@"avc1", QTAddImageCodecType, nil]; // use the H264 codec

qtMovie = [[QTMovie alloc] initToWritableFile:outputVideoFile error:NULL]; // initialize the output QT movie object

long fps = 25;
frameNum = 0;

NSTimeInterval renderingTime = 0;
NSTimeInterval frameInc = (1./fps);
NSTimeInterval myMovieDuration = 70;
NSImage *myImage;

while (renderingTime <= myMovieDuration) {
    if (![qRenderer renderAtTime:renderingTime arguments:NULL])
        NSLog(@"Rendering failed at time %.3fs", renderingTime);

    myImage = [qRenderer snapshotImage];
    [qtMovie addImage:myImage forDuration:QTMakeTimeWithTimeInterval(frameInc) withAttributes:imageAttrs];
    [myImage release];

    frameNum++;
    renderingTime = frameNum * frameInc;
}

[qtMovie updateMovieFile];
[qRenderer release];
[qtMovie release];
It works, but my application is not able to do this in real time on my new MacBook Pro, while I know that QuickTime Broadcaster can encode images in real time in H264, with an even higher quality than the one I use, on the same computer.
So why? What is the issue here? Is this a hardware management problem (multi-core threading, GPU, ...) or am I missing something? Keep in mind that I am new to the Apple development world (two weeks of practice), in Objective-C, Cocoa, Xcode, and the QuickTime and Quartz Composer libraries alike.
Thanks for your help.
Answer (score: 5)
AVFoundation is a more efficient way to render a QuartzComposer animation to an H.264 video stream:
size_t width = 640;
size_t height = 480;

const char *outputFile = "/tmp/Arabesque.mp4";

QCComposition *composition = [QCComposition compositionWithFile:@"/System/Library/Screen Savers/Arabesque.qtz"];
QCRenderer *renderer = [[QCRenderer alloc] initOffScreenWithSize:NSMakeSize(width, height)
                                                      colorSpace:CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB)
                                                     composition:composition];

// AVAssetWriter will not write over an existing file.
unlink(outputFile);
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:@(outputFile)]
                                                       fileType:AVFileTypeMPEG4
                                                          error:NULL];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @(width),
                                 AVVideoHeightKey : @(height) };
AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                                 outputSettings:videoSettings];

[videoWriter addInput:writerInput];
[writerInput release];

// The adaptor lets us hand CVPixelBuffers straight to the writer input.
AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                     sourcePixelBufferAttributes:NULL];

int framesPerSecond = 30;
int totalDuration = 30;
int totalFrameCount = framesPerSecond * totalDuration;

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

__block long frameNumber = 0;

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work-queue", DISPATCH_QUEUE_SERIAL);

NSLog(@"Starting.");
// Render and append frames whenever the writer input is ready for more data.
[writerInput requestMediaDataWhenReadyOnQueue:workQueue usingBlock:^{
    while ([writerInput isReadyForMoreMediaData]) {
        NSTimeInterval frameTime = (float)frameNumber / framesPerSecond;
        if (![renderer renderAtTime:frameTime arguments:NULL]) {
            NSLog(@"Rendering failed at time %.3fs", frameTime);
            break;
        }

        // Grab the rendered frame directly as a CVPixelBuffer (no NSImage round trip).
        CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
        [pixelBufferAdaptor appendPixelBuffer:frame withPresentationTime:CMTimeMake(frameNumber, framesPerSecond)];
        CFRelease(frame);

        frameNumber++;

        if (frameNumber >= totalFrameCount) {
            // Done: close out the input and the file, then clean up.
            [writerInput markAsFinished];
            [videoWriter finishWriting];

            [videoWriter release];
            [renderer release];

            NSLog(@"Rendered %ld frames.", frameNumber);
            break;
        }
    }
}];
In my testing this was around twice as fast as your posted code that uses QTKit. The biggest improvement appears to come from the H.264 encoding being handed off to the GPU rather than being performed in software. From a quick glance at a profile, the remaining bottlenecks appear to be the rendering of the composition itself and reading the rendered data back from the GPU into a pixel buffer. Obviously the complexity of your composition will have some impact on this.
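If you want to see how that split looks for your own composition, a rough way (an illustrative sketch reusing the variables from the loop above, not something I profiled in this exact form) is to time the two calls inside the frame loop:

// Illustrative per-stage timing, dropped into the body of the frame loop above.
CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
BOOL rendered = [renderer renderAtTime:frameTime arguments:NULL];
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
CVPixelBufferRef frame = (CVPixelBufferRef)[renderer createSnapshotImageOfType:@"CVPixelBuffer"];
CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent();
NSLog(@"rendered=%d  composition render: %.1f ms  GPU readback: %.1f ms",
      rendered, (t1 - t0) * 1000.0, (t2 - t1) * 1000.0);
// ... then append the pixel buffer and CFRelease(frame) exactly as before.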
It may be possible to optimise this further by using QCRenderer's ability to provide snapshots as a CVOpenGLBufferRef, which might keep the frame's data on the GPU rather than reading it back to hand it to the encoder. I didn't look too far into that, though.
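For what it's worth, requesting that kind of snapshot would look something like the sketch below, using the same createSnapshotImageOfType: call as above with the "CVOpenGLBuffer" type. This is only a starting point: appendPixelBuffer: still expects a CVPixelBufferRef, so feeding the OpenGL buffer to the encoder without a readback is exactly the part left open.

// Sketch only: grab the frame as a GPU-backed CVOpenGLBuffer instead of a CVPixelBuffer.
CVOpenGLBufferRef glFrame = (CVOpenGLBufferRef)[renderer createSnapshotImageOfType:@"CVOpenGLBuffer"];
if (glFrame != NULL) {
    // The buffer wraps a GPU surface; AVAssetWriterInputPixelBufferAdaptor still expects a
    // CVPixelBufferRef, so a conversion step would be needed here before encoding.
    CVOpenGLBufferRelease(glFrame);
}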