I'm trying to use CoreImage's face detection in iOS 5, but it isn't detecting anything. I'm trying to detect faces in an image that was just captured by the camera, using this code:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];
    NSArray *features = [faceDetector featuresInImage:image.CIImage];
    NSLog(@"Features = %@", features);
    [self dismissModalViewControllerAnimated:YES];
}
This compiles and runs fine, but the features array is always empty no matter what's in the image... Any ideas?
Answer 0 (score: 22)
I can't reply directly to your @14:52 comment, Vic320, but I've been playing with face detection using the front camera, and I was going around in circles because the front camera would never pick up my face...
It turns out it's very sensitive to rotation. I noticed that when I held my iPad 2 in portrait (as you'd expect when using the front camera), I was getting less than 10% recognition accuracy. On a whim, I turned it sideways and got 100% recognition with the front camera.
The simple fix, if you're always using the front camera in portrait, is to add this little snippet:
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];
That 6 in there forces the detector to operate in portrait mode. Apple's SquareCam sample has a whole bunch of utility methods to work out the orientation dynamically if you need to determine it on the fly.
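For what it's worth, here is a minimal sketch of that kind of device-orientation-to-EXIF mapping, written as plain C so it drops straight into an Objective-C file. The integer constants are assumptions that mirror the `UIDeviceOrientation` raw values and the EXIF codes accepted by `CIDetectorImageOrientation`; double-check the exact mapping against Apple's SquareCam sample before relying on it.

```c
#include <stdbool.h>

/* Raw values assumed to mirror UIDeviceOrientation
   (an assumption; verify against UIDevice.h). */
enum {
    DevicePortrait = 1,
    DevicePortraitUpsideDown = 2,
    DeviceLandscapeLeft = 3,   /* home button on the right */
    DeviceLandscapeRight = 4   /* home button on the left  */
};

/* Returns an EXIF orientation code (1-8) suitable for
   CIDetectorImageOrientation, in the style of SquareCam's mapping. */
int exifOrientationForDevice(int deviceOrientation, bool usingFrontCamera) {
    switch (deviceOrientation) {
        case DevicePortraitUpsideDown:
            return 8;                       /* 0th row left, 0th column bottom */
        case DeviceLandscapeLeft:
            return usingFrontCamera ? 3 : 1;
        case DeviceLandscapeRight:
            return usingFrontCamera ? 1 : 3;
        case DevicePortrait:
        default:
            return 6;                       /* 0th row right, 0th column top */
    }
}
```

Note that portrait maps to 6 in every case, which is exactly the magic number in the snippet above.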
Answer 1 (score: 5)
OK, reading the documentation carefully always helps. In the UIImage docs, under the CIImage property, it says: "If the UIImage object was initialized using a CGImageRef, the value of the property is nil." Apparently UIImagePickerController does initialize the image from a CGImageRef, because this property is indeed nil. To make the code above work, you need to add:
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
and change this line:
NSArray *features = [faceDetector featuresInImage:ciImage];
The other big thing I noticed is that face detection on still images doesn't really work on the low-resolution images from the front camera! It works every time I use the high-resolution back camera. Maybe the algorithm is tuned for high resolution...
Answer 2 (score: 4)
Try the following. Assume the photo is loaded in the image variable:
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation] + 1];
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:fOptions];
for (CIFaceFeature *f in features) {
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));
    if (f.hasLeftEyePosition)
        NSLog(@"left eye position x = %f, y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        NSLog(@"right eye position x = %f, y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        NSLog(@"mouth position x = %f, y = %f", f.mouthPosition.x, f.mouthPosition.y);
}
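One caveat about the `[image imageOrientation] + 1` trick: UIImageOrientation raw values and EXIF orientation codes only line up by a simple +1 for the unrotated case, so an explicit lookup is safer. Here is a minimal sketch of the standard mapping, again as plain C; the assumption that the UIImageOrientation raw values run 0-7 in the order Up, Down, Left, Right, UpMirrored, DownMirrored, LeftMirrored, RightMirrored should be verified against UIImage.h.

```c
/* Maps a UIImageOrientation raw value (assumed 0-7, see lead-in)
   to the EXIF orientation code (1-8) expected by
   CIDetectorImageOrientation. */
int exifFromUIImageOrientation(int uiOrientation) {
    static const int exif[8] = {
        1, /* Up            -> EXIF 1                   */
        3, /* Down          -> EXIF 3 (rotated 180)     */
        8, /* Left          -> EXIF 8 (rotated 90 CCW)  */
        6, /* Right         -> EXIF 6 (rotated 90 CW)   */
        2, /* UpMirrored    -> EXIF 2                   */
        4, /* DownMirrored  -> EXIF 4                   */
        5, /* LeftMirrored  -> EXIF 5                   */
        7  /* RightMirrored -> EXIF 7                   */
    };
    return (uiOrientation >= 0 && uiOrientation < 8) ? exif[uiOrientation] : 1;
}
```

With this, the orientation line above would become a call like `exifFromUIImageOrientation([image imageOrientation])` instead of the +1.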
Answer 3 (score: 1)
None of the answers above worked for me (iOS 8.4), iPad mini & iPad Air 2.
I had the same observation as robwormald. Face detection worked when the iPad was rotated, so I rotated the ciImage :)
let ciImage = CIImage(CVPixelBuffer: pixelBuffer, options: attachments)
let angle = CGFloat(-M_PI/2)
let rotatedImage = ciImage.imageByApplyingTransform(CGAffineTransformMakeRotation(angle))