Cropping out a face from an image with Core Image

Date: 2013-02-24 14:16:28

Tags: objective-c crop face-detection core-image

I need to crop out the face (or faces) in a given image and use the cropped face image elsewhere. I am using Core Image's CIDetectorTypeFace. The problem is that the new UIImage containing only the detected face needs to be larger, because the hair or the lower jaw gets cut off. How can I increase the size of the rect used in initWithFrame:faceFeature.bounds? The sample code I am using:

    CIImage *image = [CIImage imageWithCGImage:staticBG.image.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    NSArray *features = [detector featuresInImage:image];

    for (CIFaceFeature *faceFeature in features)
    {
        UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        [staticBG addSubview:faceView];

        // cropping the face
        CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], faceFeature.bounds);
        [resultView setImage:[UIImage imageWithCGImage:imageRef]];
        CGImageRelease(imageRef);
    }

Note: the red frame I use to show the detected face region does not match the cropped image at all. Maybe I am not displaying the frame correctly, but since I don't need to show the frame and really just need the cropped-out face, I'm not too worried about it.
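The mismatch described above is usually a coordinate-system issue: Core Image reports faceFeature.bounds with the origin at the bottom-left of the image, while UIKit views and CGImageCreateWithImageInRect use a top-left origin. A minimal sketch of flipping the rect before cropping (this ignores the UIImage's scale and orientation, which a complete solution would also account for):

    // faceFeature.bounds has a bottom-left origin; CGImage cropping expects top-left.
    // Flip the rect vertically using the image height in pixels.
    CGFloat imageHeight = CGImageGetHeight(staticBG.image.CGImage);
    CGRect faceRect = faceFeature.bounds;
    faceRect.origin.y = imageHeight - faceRect.origin.y - faceRect.size.height;

    CGImageRef croppedRef = CGImageCreateWithImageInRect(staticBG.image.CGImage, faceRect);
    [resultView setImage:[UIImage imageWithCGImage:croppedRef]];
    CGImageRelease(croppedRef);

The same flipped rect can also be used for the red overlay view, which should make the frame and the crop line up.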

1 Answer:

Answer 0 (score: 4):

Not sure, but you could try:

    CGRect biggerRectangle = CGRectInset(faceFeature.bounds,
                                         someNegativeCGFloatToIncreaseSizeForXAxis,
                                         someNegativeCGFloatToIncreaseSizeForYAxis);
    CGImageRef imageRef = CGImageCreateWithImageInRect([staticBG.image CGImage], biggerRectangle);
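One caveat with a negative inset: the enlarged rect can extend past the image edges, in which case CGImageCreateWithImageInRect clips it to the image (and returns NULL if the rects don't intersect at all). A sketch that grows the face rect proportionally and clamps it to the image bounds (the 30% padding factor is an arbitrary example, not from the original answer):

    // Grow the face rect by 30% of its own size on each axis
    // (a negative inset enlarges the rect).
    CGRect biggerRectangle = CGRectInset(faceFeature.bounds,
                                         -faceFeature.bounds.size.width  * 0.3f,
                                         -faceFeature.bounds.size.height * 0.3f);

    // Clamp to the image bounds so the crop is never silently clipped or NULL.
    CGRect imageBounds = CGRectMake(0, 0,
                                    CGImageGetWidth(staticBG.image.CGImage),
                                    CGImageGetHeight(staticBG.image.CGImage));
    biggerRectangle = CGRectIntersection(biggerRectangle, imageBounds);

    CGImageRef imageRef = CGImageCreateWithImageInRect(staticBG.image.CGImage, biggerRectangle);

Padding proportionally to the detected face size tends to work better than a fixed pixel inset, since faces come back at very different sizes.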

https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CGGeometry/Reference/reference.html#//apple_ref/c/func/CGRectInset