Face-to-face comparison

Date: 2016-01-08 13:54:31

Tags: ios objective-c image-comparison

I'm trying to build a recognition app that identifies a particular person's face. I have the face detection part working, but I can't find a way to compare the detected face with the photos stored in the album inside my app.

Here is the face detection code:

-(void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // create a face detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

    // create an array containing all the detected faces from the detector
    NSArray* features = [detector featuresInImage:image];

    for(CIFaceFeature* faceFeature in features)
    {
        // get the width of the face
        CGFloat faceWidth = faceFeature.bounds.size.width;

        // create a UIView using the bounds of the face
        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // add the new view to create a box around the face
        [self.view addSubview:faceView];

        if(faceFeature.hasLeftEyePosition)
        {
            // create a UIView with a size based on the width of the face
            UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x-faceWidth*0.15, faceFeature.leftEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the leftEyeView based on the face
            [leftEyeView setCenter:faceFeature.leftEyePosition];
            // round the corners
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the view to the window
            [self.view addSubview:leftEyeView];
        }

        if(faceFeature.hasRightEyePosition)
        {
            // create a UIView with a size based on the width of the face
            UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x-faceWidth*0.15, faceFeature.rightEyePosition.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the rightEyeView based on the face
            [rightEyeView setCenter:faceFeature.rightEyePosition];
            // round the corners
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the new view to the window
            [self.view addSubview:rightEyeView];
        }

        if(faceFeature.hasMouthPosition)
        {
            // create a UIView with a size based on the width of the face
            UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
            // change the background color for the mouth to green
            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
            // set the position of the mouthView based on the face
            [mouth setCenter:faceFeature.mouthPosition];
            // round the corners
            mouth.layer.cornerRadius = faceWidth*0.2;
            // add the new view to the window
            [self.view addSubview:mouth];
        }
    }
}



-(void)faceDetector
{
    // Load the picture for face detection
    UIImageView* image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"testpicture.png"]];

    // Draw the face detection image
    [self.view addSubview:image];

    // Execute the method used to markFaces in background
    [self performSelectorInBackground:@selector(markFaces:) withObject:image];

    // flip image on y-axis to match coordinate system used by core image
    [image setTransform:CGAffineTransformMakeScale(1, -1)];

    // flip the entire window to make everything right side up
    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];
}
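
For the album side of the comparison, a minimal sketch of the enumeration step could look like the following. It assumes the Photos framework and that photo-library access has already been authorized; the method name detectFacesInLibrary is just an illustration. Running the same detector over each photo only tells you that a photo contains a face, not whether it is the same person:

#import <Photos/Photos.h>

- (void)detectFacesInLibrary
{
    // reuse the same kind of detector as in markFaces:
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];

    // fetch every image asset in the photo library (assumes access is already granted)
    PHFetchResult *assets = [PHAsset fetchAssetsWithMediaType:PHAssetMediaTypeImage options:nil];

    PHImageRequestOptions *requestOptions = [[PHImageRequestOptions alloc] init];
    requestOptions.synchronous = YES;   // simpler for a sketch; avoid doing this on the main thread

    [assets enumerateObjectsUsingBlock:^(PHAsset *asset, NSUInteger idx, BOOL *stop) {
        [[PHImageManager defaultManager] requestImageForAsset:asset
                                                   targetSize:CGSizeMake(640, 640)
                                                  contentMode:PHImageContentModeAspectFit
                                                      options:requestOptions
                                                resultHandler:^(UIImage *result, NSDictionary *info) {
            if (result.CGImage == NULL) return;

            // run face detection on the album photo
            CIImage *ciImage = [CIImage imageWithCGImage:result.CGImage];
            NSArray *faces = [detector featuresInImage:ciImage];
            NSLog(@"Photo %lu contains %lu face(s)", (unsigned long)idx, (unsigned long)faces.count);
        }];
    }];
}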

1 Answer:

Answer 0 (score: 1):

From the documentation:

Core Image can analyze and find human faces in an image. It performs face detection, not recognition. Face detection is the identification of rectangles that contain human face features, whereas face recognition is the identification of specific human faces (John, Mary, and so on). After Core Image detects a face, it can provide information about face features, such as eye and mouth positions. It can also track the position of an identified face in a video.

Unfortunately, Apple has not yet provided an API to recognize faces. You could look at third party libraries.
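
The tracking that the documentation mentions is exposed through the CIDetectorTracking option. A minimal sketch, assuming CIImage frames arriving from a video source such as AVCaptureVideoDataOutput (the frame parameter below is hypothetical, and in real code you would reuse the detector instead of creating it per frame):

- (void)trackFacesInFrame:(CIImage *)frame
{
    // enable tracking so a face keeps the same trackingID across frames
    CIDetector *trackingDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                                      context:nil
                                                      options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow,
                                                                 CIDetectorTracking : @YES }];

    NSArray *faces = [trackingDetector featuresInImage:frame];
    for (CIFaceFeature *face in faces) {
        if (face.hasTrackingID) {
            // the same physical face keeps the same trackingID from frame to frame,
            // which lets you follow it, but it still does not tell you who it is
            NSLog(@"Face %d at %@", face.trackingID, NSStringFromCGRect(face.bounds));
        }
    }
}

For actual recognition, i.e. matching the detected face against the people in your album, you would still need a third-party solution, for example OpenCV's face recognition module or a server-side face API.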