I'm trying to crop a UIImage down to a face detected with the built-in Core Image face detection. I seem to be detecting the face correctly, but when I crop the UIImage to the face's bounds, the result is nowhere near right. My face detection code looks like this:
-(NSArray *)facesForImage:(UIImage *)image {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    NSDictionary *opts = @{CIDetectorAccuracy : CIDetectorAccuracyHigh};
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts];
    NSArray *features = [detector featuresInImage:ciImage];
    return features;
}
...and the code to crop the image looks like this:
-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if ((index < 0) || (index >= faces.count)) {
        DDLogError(@"Invalid face index provided");
        return nil;
    }
    CIFaceFeature *face = [faces objectAtIndex:index];
    CGRect faceBounds = face.bounds;
    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    return croppedImage;
}
I have a test image with exactly one face in it, and the detector finds it without a problem, but the crop is way off. Any idea what's wrong with this code?
Answer 0 (score: 3)
For anyone else with a similar problem of converting CGImage coordinates to UIImage coordinates: I found this great article explaining how to use CGAffineTransform to accomplish exactly what I was looking for.
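For reference, here's a minimal sketch of what that conversion looks like applied to the cropping method from the question. It assumes the image has the default .up orientation and a scale of 1, so the CGImage and the UIImage share dimensions; Core Image puts the origin in the bottom-left corner while UIKit puts it in the top-left, so the face rect has to be flipped vertically before cropping:

-(UIImage *)imageCroppedToFaceAtIndex:(NSInteger)index forImage:(UIImage *)image {
    NSArray *faces = [self facesForImage:image];
    if ((index < 0) || (index >= faces.count)) {
        DDLogError(@"Invalid face index provided");
        return nil;
    }
    CIFaceFeature *face = [faces objectAtIndex:index];

    // Flip the bottom-left-origin Core Image rect into UIKit's
    // top-left-origin coordinate space before cropping.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -image.size.height);
    CGRect faceBounds = CGRectApplyAffineTransform(face.bounds, transform);

    CGImageRef imageRef = CGImageCreateWithImageInRect(image.CGImage, faceBounds);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // CGImageCreateWithImageInRect returns a +1 reference
    return croppedImage;
}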
Answer 1 (score: 2)
The code for converting face geometry from Core Image to UIImage coordinates is fiddly. I haven't messed with it in quite a while, but I remember it giving me fits, especially when dealing with rotated images.
I suggest looking at the demo app "SquareCam", which you can find by searching the Xcode documentation. It draws red squares around faces, which is a good start.
Note that the rectangle you get from Core Image is always a square, and sometimes crops a little too tightly. You may have to make your cropping rectangle taller and wider.
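A minimal sketch of that padding, assuming faceBounds has already been converted to UIKit coordinates; the 20% factor is an arbitrary choice, not a value taken from SquareCam:

// Grow the square by 20% of the face width on every side, then clamp it
// to the image so CGImageCreateWithImageInRect doesn't clip unexpectedly.
CGFloat padding = faceBounds.size.width * 0.2;
CGRect paddedBounds = CGRectInset(faceBounds, -padding, -padding);
paddedBounds = CGRectIntersection(paddedBounds,
                                  CGRectMake(0, 0, image.size.width, image.size.height));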
Answer 2 (score: 0)
This class will do the trick! A very flexible and handy wrapper around UIImage. https://github.com/kylestew/KSMagicalCrop
Answer 3 (score: -1)
Use this code; it worked for me.
CIImage *image = [CIImage imageWithCGImage:facePicture.image.CGImage];

// Container for the face attribute views (assumes facePicture is a
// UIImageView showing the original image, and view is its superview).
UIView *faceContainer = [[UIView alloc] initWithFrame:facePicture.frame];

// Flip faceContainer on the y-axis to match the coordinate system used by Core Image.
[faceContainer setTransform:CGAffineTransformMakeScale(1, -1)];

// Create a face detector; since speed is not an issue, use high accuracy.
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

// Detect all the faces in the image.
NSArray *features = [detector featuresInImage:image];

UIImage *croppedImage = nil;
for (CIFaceFeature *faceFeature in features)
{
    // Width of the face, used to size the eye and mouth views.
    CGFloat faceWidth = faceFeature.bounds.size.width;

    // A UIView covering the bounds of the face.
    UIView *faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
    [faceContainer addSubview:faceView];

    if (faceFeature.hasLeftEyePosition)
    {
        // A view sized relative to the face width, centered on the left eye.
        UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth * 0.15,
                                                                       faceFeature.leftEyePosition.y - faceWidth * 0.15,
                                                                       faceWidth * 0.3, faceWidth * 0.3)];
        [faceContainer addSubview:leftEyeView];
    }
    if (faceFeature.hasRightEyePosition)
    {
        // A view sized relative to the face width, centered on the right eye.
        UIView *rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth * 0.15,
                                                                        faceFeature.rightEyePosition.y - faceWidth * 0.15,
                                                                        faceWidth * 0.3, faceWidth * 0.3)];
        [faceContainer addSubview:rightEyeView];
    }
    if (faceFeature.hasMouthPosition)
    {
        // A view sized relative to the face width, centered on the mouth.
        UIView *mouthView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth * 0.2,
                                                                     faceFeature.mouthPosition.y - faceWidth * 0.2,
                                                                     faceWidth * 0.4, faceWidth * 0.4)];
        [faceContainer addSubview:mouthView];
    }

    [view addSubview:faceContainer];

    // Flip the face rect back into top-left-origin (UIKit) coordinates.
    CGFloat y = view.frame.size.height - (faceView.frame.origin.y + faceView.frame.size.height);
    CGRect rect = CGRectMake(faceView.frame.origin.x, y, faceView.frame.size.width, faceView.frame.size.height);

    // <Original Image> is a placeholder for the source UIImage.
    CGImageRef imageRef = CGImageCreateWithImageInRect([<Original Image> CGImage], rect);
    croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);

    //---- cropped image ----//
    UIImageView *img = [[UIImageView alloc] initWithFrame:CGRectMake(faceView.frame.origin.x, y,
                                                                     faceView.frame.size.width, faceView.frame.size.height)];
    img.image = croppedImage;
    [view addSubview:img];
}