Detect a face while my camera is open

Asked: 2017-08-15 07:15:23

Tags: ios objective-c face-recognition

I need to build an app that has only a camera view, and it should detect when the camera is looking at a face. Can someone point me in the right direction? I have already built something that detects faces in a still image, but now I need to do it with the camera. This is what I have so far:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"picture" ofType:@"JPG"];
    NSURL *url = [NSURL fileURLWithPath:path];

    CIContext *context = [CIContext contextWithOptions:nil];

    CIImage *image = [CIImage imageWithContentsOfURL:url];

    NSDictionary *options = @{CIDetectorAccuracy: CIDetectorAccuracyHigh};

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:options];

    NSArray *features = [detector featuresInImage:image];

}
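
For reference, the features array at the end is where the detected faces land; a minimal sketch of how it can be read (with CIDetectorTypeFace each element is a CIFaceFeature — this loop is only for illustration, it is not part of my actual code):

for (CIFaceFeature *face in features) {
    // bounds is the face rectangle in the image's coordinate space
    NSLog(@"face found at %@", NSStringFromCGRect(face.bounds));
    if (face.hasLeftEyePosition) {
        NSLog(@"left eye at %@", NSStringFromCGPoint(face.leftEyePosition));
    }
}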

For the live camera I then did the following:

-(void)viewWillAppear:(BOOL)animated{
    _session = [[AVCaptureSession alloc] init];
    [_session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if([_session canAddInput:deviceInput]){
        [_session addInput:deviceInput];
    }

    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:_session];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];

    CGRect frame = self.frameCapture.frame;
    [previewLayer setFrame:frame];

    [rootLayer insertSublayer:previewLayer atIndex:0];
    [_session startRunning];

}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
    for(AVMetadataObject *metadataObject in metadataObjects) {
        if([metadataObject.type isEqualToString:AVMetadataObjectTypeFace]) {

           _faceDetectedLabel.text = @"face detected";
        }
    }
}

But it still doesn't detect any faces. What am I doing wrong?

1 Answer:

Answer 0 (score: 1)

You have to add a metadata output to the session before you will get any metadata:

AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
// create a serial queue to handle the metadata output callbacks
dispatch_queue_t metadataQueueOutput = dispatch_queue_create("com.YourAppName.metaDataQueue.OutputQueue", DISPATCH_QUEUE_SERIAL);
[metadataOutput setMetadataObjectsDelegate:self queue:metadataQueueOutput];
if ([_session canAddOutput:metadataOutput]) {
    [_session addOutput:metadataOutput];
}
// restrict the output to the object types you are interested in;
// then you don't need to check the type in the callback
metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
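
For completeness, here is a sketch of how this fits into the viewWillAppear: from the question (assuming the view controller class is called ViewController — use your own class name). Note that the controller also has to declare conformance to AVCaptureMetadataOutputObjectsDelegate, otherwise captureOutput:didOutputMetadataObjects:fromConnection: is never called:

@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate>
@end

-(void)viewWillAppear:(BOOL)animated{
    [super viewWillAppear:animated];

    // ... the session, input and preview layer setup from the question ...

    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    dispatch_queue_t metadataQueueOutput = dispatch_queue_create("com.YourAppName.metaDataQueue.OutputQueue", DISPATCH_QUEUE_SERIAL);
    [metadataOutput setMetadataObjectsDelegate:self queue:metadataQueueOutput];
    if ([_session canAddOutput:metadataOutput]) {
        [_session addOutput:metadataOutput];
        // metadataObjectTypes has to be set after the output is attached to the session,
        // otherwise AVMetadataObjectTypeFace is not yet in availableMetadataObjectTypes
        metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeFace];
    }

    [_session startRunning];
}

Also keep in mind that the delegate callback runs on the queue you pass to setMetadataObjectsDelegate:queue:, so dispatch the _faceDetectedLabel update back to the main queue.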

That should work. Let me know if it does.