Performance problem with captureOutput:didOutputSampleBuffer:fromConnection:

Posted: 2011-05-08 01:17:29

Tags: iphone ios ios4 avcapturesession

I am using AVCaptureSessionPhoto to let the user take high-resolution photos. While capturing, I use the captureOutput:didOutputSampleBuffer:fromConnection: delegate method to retrieve a thumbnail at capture time. However, even though I try to do minimal work in the delegate method, the app becomes somewhat sluggish (I say somewhat because it is still usable). Also, the iPhone tends to run hot.

Is there any way to reduce the amount of work the iPhone has to do?

I set up the AVCaptureVideoDataOutput by doing the following:

self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;

// Deliver sample buffers on a dedicated serial queue
dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL);
[self.videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

// Specify the pixel format
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                 forKey:(id)kCVPixelBufferPixelFormatTypeKey];
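One common way to lighten the per-frame load (a sketch, not from the original question) is to lower the scheduling priority of the delegate queue so frame processing yields to UI work. This assumes the same queue name as the setup above; dispatch_set_target_queue and dispatch_get_global_queue are standard libdispatch calls:

```objc
// Sketch: retarget the sample-buffer queue at the low-priority global queue,
// so delegate callbacks do not compete with the main thread for CPU.
dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL);
dispatch_set_target_queue(queue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0));
[self.videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
```

Combined with alwaysDiscardsLateVideoFrames = YES, late frames are then simply dropped instead of piling up behind slow processing.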

Here is my captureOutput:didOutputSampleBuffer:fromConnection: (along with the helper imageRefFromSampleBuffer: method):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (videoDataOutputConnection == nil) {
        videoDataOutputConnection = connection;
    }
    if (getThumbnail > 0) {
        getThumbnail--;
        CGImageRef tempThumbnail = [self imageRefFromSampleBuffer:sampleBuffer];
        UIImage *image;
        if (self.prevLayer.mirrored) {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationLeftMirrored];
        }
        else {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationRight];
        }
        [self.cameraThumbnailArray insertObject:image atIndex:0];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.freezeCameraView.image = image;
        });
        [image release]; // the array and the copied block both retain it; without this the UIImage leaks
        CGImageRelease(tempThumbnail);
    }
    [pool release];
}

// Returns a retained CGImageRef; the caller is responsible for releasing it.
-(CGImageRef)imageRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return newImage;
}
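Note that the method above produces a CGImage at the full frame resolution, which is then stored and displayed as a "thumbnail". A sketch of an alternative (not from the original post; thumbWidth and thumbHeight are hypothetical target dimensions) is to draw the full-size image into a small bitmap context, so everything downstream handles far fewer pixels:

```objc
// Sketch: scale a full-resolution CGImage down to thumbnail size.
// Returns a retained CGImageRef; the caller releases it.
- (CGImageRef)thumbnailFromImage:(CGImageRef)fullImage
                           width:(size_t)thumbWidth
                          height:(size_t)thumbHeight {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Passing NULL/0 lets CoreGraphics allocate and pick the row stride.
    CGContextRef context = CGBitmapContextCreate(NULL, thumbWidth, thumbHeight, 8, 0, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextSetInterpolationQuality(context, kCGInterpolationLow); // cheap scaling
    CGContextDrawImage(context, CGRectMake(0, 0, thumbWidth, thumbHeight), fullImage);
    CGImageRef thumb = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return thumb;
}
```

Keeping full-resolution images in cameraThumbnailArray is also a likely contributor to memory pressure; storing scaled-down images avoids that.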

2 Answers:

Answer 0 (score: 1):

minFrameDuration is deprecated; this may work instead:

AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
stillImageConnection.videoMinFrameDuration = CMTimeMake(1, 10);
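Since the delegate callbacks come from the video data output rather than the still image output, the throttle arguably belongs on that output's connection. A hedged sketch (videoDataOutput is the property from the question; the respondsToSelector check is a guard because videoMinFrameDuration on AVCaptureConnection requires iOS 5):

```objc
// Sketch: limit the video data output, i.e. the source of the
// didOutputSampleBuffer: callbacks, to at most 10 fps.
AVCaptureConnection *conn = [self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if ([conn respondsToSelector:@selector(setVideoMinFrameDuration:)]) {
    conn.videoMinFrameDuration = CMTimeMake(1, 10); // 1/10 s per frame
}
```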

Answer 1 (score: 0):

To improve performance, we should configure the AVCaptureVideoDataOutput:

output.minFrameDuration = CMTimeMake(1, 10);

Here we specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting in the queue, since that can cause memory problems). It is effectively the reciprocal of the maximum frame rate. In this example we set the minimum frame duration to 1/10 of a second, so the maximum frame rate is 10 fps. We are saying that we cannot process more than 10 frames per second.
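The relationship between CMTimeMake's arguments and the frame rate can be made explicit (a small illustration, not part of the original answer; CMTimeMake and CMTimeGetSeconds are standard CoreMedia functions):

```objc
// CMTimeMake(value, timescale) represents value/timescale seconds.
CMTime minFrameDuration = CMTimeMake(1, 10);          // 1/10 second per frame
Float64 seconds = CMTimeGetSeconds(minFrameDuration); // 0.1
Float64 maxFPS = 1.0 / seconds;                       // 10.0 frames per second
```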

Hope that helps!