I'm working on a project in which I generate a video from UIImages, using code I found here, and I've been struggling for a few days now to optimize it (for about 300 images it takes 5 minutes on the simulator, and it simply crashes on the device because of memory).
I'll start with the working code I have today (I'm using ARC):
- (void)writeImageAsMovie:(NSArray *)array toPath:(NSString *)path size:(CGSize)size duration:(int)duration
{
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                            fileType:AVFileTypeQuickTimeMovie
                                                               error:&error];
    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                          outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    // Append the first frame at time zero.
    CVPixelBufferRef buffer = NULL;
    buffer = [self newPixelBufferFromCGImage:[[self.frames objectAtIndex:0] CGImage]];
    CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];

    dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
    int frameNumber = [self.frames count];

    [writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
        NSLog(@"Entering block with frames: %i", [self.frames count]);
        if (!self.frames || [self.frames count] == 0)
        {
            return;
        }
        int i = 1;
        while (1)
        {
            if (i == frameNumber)
            {
                break;
            }
            if ([writerInput isReadyForMoreMediaData])
            {
                freeMemory();
                NSLog(@"inside for loop %d (%i)", i, [self.frames count]);
                UIImage *image = [self.frames objectAtIndex:i];
                CGImageRef imageRef = [image CGImage];
                CVPixelBufferRef sampleBuffer = [self newPixelBufferFromCGImage:imageRef];
                CMTime frameTime = CMTimeMake(1, TIME_STEP);
                CMTime lastTime = CMTimeMake(i, TIME_STEP);
                CMTime presentTime = CMTimeAdd(lastTime, frameTime);
                if (sampleBuffer)
                {
                    [adaptor appendPixelBuffer:sampleBuffer withPresentationTime:presentTime];
                    i++;
                    CVPixelBufferRelease(sampleBuffer);
                }
                else
                {
                    break;
                }
            }
        }
        [writerInput markAsFinished];
        [videoWriter finishWriting];
        self.frames = nil;
        CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
    }];
}
Now the function that creates the pixel buffers, which is the part I'm struggling with:
- (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pxbuffer = NULL;
    int width = CGImageGetWidth(image) * 2;
    int height = CGImageGetHeight(image) * 2;

    NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                       [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
                                       [NSNumber numberWithInt:width], kCVPixelBufferWidthKey,
                                       [NSNumber numberWithInt:height], kCVPixelBufferHeightKey,
                                       nil];

    CVPixelBufferPoolRef pixelBufferPool;
    CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef)attributes, &pixelBufferPool);
    NSParameterAssert(theError == kCVReturnSuccess);
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, width * 4, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
First strange thing: as you can see in that function, I have to multiply the width and the height by 2, otherwise the resulting video is all messed up, and I can't understand why (I can post screenshots if it helps; the pixels seem to come from my image, but the width isn't right, and there is a big black square on the bottom half of the video).
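My current guess is a points-versus-pixels mix-up: CGImageGetWidth/CGImageGetHeight already return pixels, while the size I pass to the writer is in points, so on a Retina screen the two differ by a factor of 2. A sketch of what I plan to try (the helper name videoSettingsForImage: is just something I made up): measure the first frame once, in pixels, and feed the exact same numbers to the writer settings and to every pixel buffer, so the two can never disagree and no "* 2" fudge factor is needed.

#import <AVFoundation/AVFoundation.h>

// Sketch, assuming every frame has the same pixel dimensions as the first one.
- (NSDictionary *)videoSettingsForImage:(UIImage *)image
{
    size_t pixelWidth  = CGImageGetWidth(image.CGImage);   // CGImage sizes are in pixels
    size_t pixelHeight = CGImageGetHeight(image.CGImage);  // (UIImage.size is in points)
    return @{ AVVideoCodecKey  : AVVideoCodecH264,
              AVVideoWidthKey  : @(pixelWidth),
              AVVideoHeightKey : @(pixelHeight) };
}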
Another problem is that it takes a really large amount of memory; I think the pixel buffers are not deallocated properly, but I don't see why.
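Looking at it again, one thing that stands out is that newPixelBufferFromCGImage creates a brand-new CVPixelBufferPool on every call and never releases it, so the pools and their buffers pile up. A variant I'm considering (sketch only, not verified): take buffers from one shared pool instead, for example the adaptor's own pixelBufferPool (assuming the adaptor is created with non-nil sourcePixelBufferAttributes so that the pool actually exists), and use the buffer's real bytesPerRow for the bitmap context instead of width * 4, since a row-stride mismatch could also explain the skewed pixels.

// Sketch: fill a buffer from an existing pool instead of creating a new pool per frame.
// Assumes `pool` requests kCVPixelFormatType_32ARGB at the image's pixel size.
- (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image pool:(CVPixelBufferPoolRef)pool
{
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pxbuffer);
    if (status != kCVReturnSuccess || pxbuffer == NULL) {
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 CVPixelBufferGetWidth(pxbuffer),
                                                 CVPixelBufferGetHeight(pxbuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer), // the buffer's real stride
                                                 rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    if (context) {
        CGContextDrawImage(context,
                           CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)),
                           image);
        CGContextRelease(context);
    }
    CGColorSpaceRelease(rgbColorSpace);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer; // caller releases with CVPixelBufferRelease
}

Each iteration of the writer block could then be wrapped in @autoreleasepool { ... }, so at most one frame's buffer is alive at a time.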
Finally, it's very slow, and I have two ideas to improve it that I haven't managed to get working.
The first one is to avoid creating my pixel buffers from UIImages, since I generate the UIImages myself from (uint8_t *) data. I tried to use 'CVPixelBufferCreateWithBytes', but it doesn't work. Here is what I tried:
OSType pixFmt = CVPixelBufferGetPixelFormatType(pxbuffer);
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height, pixFmt, self.composition.srcImage.resultImageData, width*2, NULL, NULL, (__bridge CFDictionaryRef) attributes, &pxbuffer);
(The parameters are the same as in the function above; my image data is encoded at 16 bits per pixel, and I couldn't find a good OSType argument to give to the function.) If anyone knows how to use it (maybe it's not possible with 16-bits/pixel data?), it would help me avoid a really useless conversion.
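In case it helps, the call I think I need would look roughly like this (pure sketch: I'm guessing kCVPixelFormatType_16LE565 for my 16-bit data, and newPixelBufferWithRawData:width:height: is just a name I made up). The pixel buffer wraps the existing bytes without copying, so the raw data has to stay alive while the buffer is used.

#import <CoreVideo/CoreVideo.h>

// Hypothetical helper: wrap existing 16-bit pixel data without copying it.
- (CVPixelBufferRef)newPixelBufferWithRawData:(uint8_t *)rawData width:(size_t)width height:(size_t)height
{
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   width,
                                                   height,
                                                   kCVPixelFormatType_16LE565, // assumption: RGB 5-6-5, little endian
                                                   rawData,
                                                   width * 2,                  // 2 bytes per pixel
                                                   NULL,                       // no release callback: rawData owned elsewhere
                                                   NULL,
                                                   NULL,
                                                   &pxbuffer);
    return (status == kCVReturnSuccess) ? pxbuffer : NULL;
}

Even if this compiles, I'm not sure the adaptor or the H.264 encoder accepts a 16-bit format at all; from what I've read they usually want 32-bit BGRA/ARGB or a 4:2:0 biplanar format, so appendPixelBuffer: might simply return NO and I'd still need a conversion.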
I know I'm asking a lot of questions in a single post, but I'm quite lost in this code; I've tried so many combinations and none of them worked...
Answer 0 (score: 0)
Points versus pixels? High-DPI Retina screens have twice as many pixels per point.