How do you capture only selected camera frames using AVCaptureSession?

Asked: 2010-11-20 10:05:22

Tags: iphone cocoa-touch avfoundation

I'm trying to use AVCaptureSession to get images from the front camera for processing. So far, whenever a new frame becomes available I just assign it to a variable, and I run an NSTimer that checks every tenth of a second whether there is a new frame and, if there is, processes it.

I'd like to grab a frame, freeze the camera, and then get the next frame whenever I want. Something like [captureSession getNextFrame], you know?

Here's part of my code, although I'm not sure how much help it will be:

- (void)startFeed {

    loopTimerIndex = 0;

    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[captureDevices objectAtIndex:1] 
                                          error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.minFrameDuration = CMTimeMake(1, 10);
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", nil);

    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];

    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetLow;
    [captureSession addInput:captureInput];
    [captureSession addOutput:captureOutput];

    imageView = [[UIImage alloc] init];

    [captureSession startRunning];

}
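
One side note on the setup above: [captureDevices objectAtIndex:1] assumes the front camera is always the second device returned, which isn't guaranteed. A minimal sketch of looking it up by position instead (just an illustration, not part of the original post):

    AVCaptureDevice *frontCamera = nil;
    // Walk the video devices and pick the front-facing one.
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == AVCaptureDevicePositionFront) {
            frontCamera = device;
            break;
        }
    }
    // frontCamera (if non-nil) can then be passed to
    // +deviceInputWithDevice:error: in place of [captureDevices objectAtIndex:1].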

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection {

    loopTimerIndex++;

    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    imageView = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationLeftMirrored];
    [delegate updatePresentor:imageView];
    if(loopTimerIndex == 1) {
        [delegate feedStarted];
    }

    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    [pool drain];

}

1 Answer:

Answer 0 (score: 3):

You don't actively poll the camera to retrieve frames, because that's not how the capture process is architected. Instead, if you only want to display frames every tenth of a second rather than every 1/30th of a second or faster, you should simply ignore the frames in between.

For example, you could maintain a timestamp and compare against it every time -captureOutput:didOutputSampleBuffer:fromConnection: fires. If the timestamp is 0.1 seconds or more before now, process and display the camera frame and reset the timestamp to the current time. Otherwise, ignore the frame.
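
A minimal sketch of that check, assuming an ivar named lastFrameTime (a CFAbsoluteTime initialized to 0) on the same class; the name is only illustrative:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();

    // Less than 0.1 s since the last processed frame: drop this one.
    if (now - lastFrameTime < 0.1) {
        return;
    }
    lastFrameTime = now;

    // ...convert the sample buffer to a UIImage and hand it to the delegate,
    // exactly as in the question's implementation above.
}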