Delayed execution of a dispatch_async block in an AVCaptureVideoDataOutputSampleBufferDelegate method

Asked: 2013-10-14 05:36:09

Tags: ios avfoundation grand-central-dispatch

I am currently working on a project that involves blink detection using an AVCaptureVideoDataOutputSampleBufferDelegate.

I have the following dispatch_async inside the delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    // Initialisation of buffer and UIImage and CIDetector, etc.

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        if (features.count > 0) {
            CIFaceFeature *feature = [features objectAtIndex:0];
            if ([feature leftEyeClosed] && [feature rightEyeClosed]) {
                flag = TRUE;
            } else {
                if (flag) {
                    blinkcount++;
                    // Update UILabel containing blink count. The count variable is incremented from here.
                }
                flag = FALSE;
            }
        }
    });
}

The method shown above is called continuously and processes the video input from the camera. The flag boolean tracks whether the eyes were closed in the previous frame, so that a blink can be detected on the closed-to-open transition. A large number of frames are dropped, but blinks are still detected correctly, so I assume the frame rate being processed is sufficient.

My problem is that the UILabel is updated only after a significant delay (~1 second) after a blink occurs. This makes the app feel laggy and unintuitive. I tried writing the UI update code without the dispatch, but that is not an option. What can I do to make the UILabel update immediately after a blink?

1 Answer:

Answer 0 (score: 1)

It's hard to know exactly what's going on here without more code, but above the dispatch call you say:

//Initialisation of buffer and UIImage and CIDetector, etc.

If you are really initialising the detector every time, that's likely suboptimal; make it long-lived. I don't know for certain that initialising a CIDetector is expensive, but it's a place to start. Also, if you are really using a UIImage here, that's also suboptimal. Don't go through UIImage; take a more direct route:

CVImageBufferRef ib = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage* ciImage = [CIImage imageWithCVPixelBuffer: ib];
NSArray* features = [longLivedDetector featuresInImage: ciImage];

Next, perform the feature detection on the background thread, and marshal only the UILabel update back to the main thread. Like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!_longLivedDetector) {
        _longLivedDetector = [CIDetector detectorOfType:CIDetectorTypeFace context: ciContext options: whatever];
    }

    CVImageBufferRef ib = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage* ciImage = [CIImage imageWithCVPixelBuffer: ib];
    NSArray* features = [_longLivedDetector featuresInImage: ciImage];
    if (!features.count)
        return;

    CIFaceFeature *feature = [features objectAtIndex:0];
    const BOOL leftAndRightClosed = [feature leftEyeClosed] && [feature rightEyeClosed];

    // Only trivial work is left to do on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^(void){
        if (leftAndRightClosed) {
            flag = TRUE;
        } else {
            if (flag) {
                blinkcount++;
                //Update UILabel containing blink count. The count variable is incremented from here.
            }
            flag = FALSE;
        }
    });
}

Finally, you should also keep in mind that facial feature detection is a non-trivial signal-processing task that takes real computation (i.e. time) to complete. I wouldn't expect there to be a way to make it much faster short of faster hardware.