I'm working on a small side project that I'll later fold into a new project. It's basically a test rig. What I'm doing is creating an AVCaptureSession and then implementing the output sample buffer delegate method. In that method I convert the sampleBuffer to a UIImage and save the UIImage. When I run the app on an iPhone 4, it only manages to save 2-3 images per second. There has to be a more efficient way to save the images.
Can someone help me speed this up?
Thanks!
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    UIImage *resultUIImage = [self imageFromSampleBuffer:sampleBuffer];
    NSData *imageData = UIImagePNGRepresentation(resultUIImage);
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *path = [paths objectAtIndex:0];
    CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    NSString *filename = [NSString stringWithFormat:@"%f.png", CMTimeGetSeconds(frameTime)];
    NSString *finalPath = [path stringByAppendingPathComponent:filename];
    [imageData writeToFile:finalPath atomically:YES];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get the Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Answer 0 (score: 7)
With this code I got the time to save one image down to 0.1 seconds.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    double frameTime = CFAbsoluteTimeGetCurrent();

    // takes freaking forever to do.
    UIImage *resultUIImage = [self imageFromSampleBuffer:sampleBuffer];

    double pre = CFAbsoluteTimeGetCurrent();
    // JPEG encoding is much faster than PNG for camera frames.
    NSData *imageData = UIImageJPEGRepresentation(resultUIImage, 0.9);
    NSLog(@"It took %f to encode the image.", CFAbsoluteTimeGetCurrent() - pre);

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *path = [paths objectAtIndex:0];
    NSString *filename = [NSString stringWithFormat:@"%f.jpg", frameTime];
    NSString *finalPath = [path stringByAppendingPathComponent:filename];
    [imageData writeToFile:finalPath atomically:YES];
}
Answer 1 (score: 4)
If you comment out the following line in your first method, you can see how many images it is able to generate:

    [imageData writeToFile:finalPath atomically:YES];

The reason I suggest this is that you are spending a lot of time writing the image to disk. It would be interesting to see how the code behaves when it never touches the disk; at the very least you would know whether the time is going into actually creating the images rather than storing them. Alternatively, as another poster mentioned, you could use Instruments to measure how much time is spent in each method.
If it turns out that writing the images to disk is what takes too long, then I'd suggest implementing a caching mechanism that keeps the images in memory and writes them to disk later.
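As a rough sketch of that caching idea (the pendingFrames name and the array-pair layout are illustrative assumptions, not part of the original answer), you could collect the encoded frames in memory during capture and flush them in one pass afterwards:

    // Hypothetical in-memory cache of encoded frames, e.g. an ivar on the controller.
    NSMutableArray *pendingFrames = [NSMutableArray array];

    // In the capture callback, instead of writing immediately,
    // store the encoded data together with its destination path.
    [pendingFrames addObject:[NSArray arrayWithObjects:imageData, finalPath, nil]];

    // Later (for example when capture stops), write everything out at once.
    for (NSArray *frame in pendingFrames) {
        NSData *data = [frame objectAtIndex:0];
        NSString *destination = [frame objectAtIndex:1];
        [data writeToFile:destination atomically:YES];
    }
    [pendingFrames removeAllObjects];

The trade-off is memory: at camera resolutions the cache grows quickly, so you would want to cap its size or flush it periodically.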
Trying to call writeToFile:atomically: on a background thread might also help.
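A minimal sketch of the background-write suggestion using GCD (assuming imageData and finalPath are the locals from the delegate method above; the queue choice is an assumption, not from the original answer):

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
        // Perform the slow disk write off the capture callback's queue,
        // so the delegate method returns quickly and fewer frames are dropped.
        [imageData writeToFile:finalPath atomically:YES];
    });

Since NSData is immutable, capturing it in the block is safe; the block retains imageData and finalPath until the write completes.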