I have the following code to get face features using CIDetector:
CGImageRef rotatedImage = [self rotate90Degree:image];
CIImage *ciImage = [CIImage imageWithCGImage:rotatedImage];
CIContext *context = [CIContext context];
NSDictionary *opts = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh,
                        CIDetectorEyeBlink : @YES,
                        CIDetectorImageOrientation : @1 };
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context
                                          options:opts];
NSArray *features = [detector featuresInImage:ciImage options:opts];
I then check whether an eye is closed like this:
for (CIFaceFeature *ciFace in features) {
    NSLog(@"Left: %@ - Right %@", ciFace.leftEyeClosed, ciFace.rightEyeClosed);
    if (ciFace && ciFace.leftEyeClosed != NULL && ciFace.rightEyeClosed != NULL) {
        if (eye == 1) { // right eye closed
            return ciFace.leftEyeClosed && !ciFace.rightEyeClosed;
        } else {
            return ciFace.rightEyeClosed && !ciFace.leftEyeClosed;
        }
    }
}
However, it never passes the null check, and my output is always Left: (null) - Right (null).
I use the same image with VNFaceObservations and it works fine; a rough sketch of the kind of Vision call I'm comparing against is below.
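For reference, the Vision path looks roughly like this (a minimal sketch, assuming a VNDetectFaceLandmarksRequest run on the same rotated CGImage; not my exact code):

// Sketch only -- the exact request type here is an assumption.
#import <Vision/Vision.h>

VNDetectFaceLandmarksRequest *faceRequest = [[VNDetectFaceLandmarksRequest alloc] init];
VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCGImage:rotatedImage
                                                                        options:@{}];
NSError *visionError = nil;
if ([handler performRequests:@[faceRequest] error:&visionError]) {
    for (VNFaceObservation *observation in faceRequest.results) {
        // The face observations (eye landmarks included) come back populated here.
        NSLog(@"Face %@ - left eye points: %lu", observation,
              (unsigned long)observation.landmarks.leftEye.pointCount);
    }
} else {
    NSLog(@"Vision error: %@", visionError);
}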
I have also tried rotating the image in every direction, and for each rotated image I have tried setting the CIDetectorImageOrientation field to every value from 1 to 8, roughly as in the sweep sketched below. After testing hundreds of images, several different faces, and every possible combination of rotation and orientation, both properties still come back as null.
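For each rotated image, the orientation sweep looked roughly like this (a sketch; detector and ciImage are the same objects as in the code above):

// Values 1-8 correspond to the standard EXIF/kCGImagePropertyOrientation values.
for (NSInteger orientation = 1; orientation <= 8; orientation++) {
    NSDictionary *sweepOpts = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh,
                                 CIDetectorEyeBlink : @YES,
                                 CIDetectorImageOrientation : @(orientation) };
    NSArray *sweepFeatures = [detector featuresInImage:ciImage options:sweepOpts];
    NSLog(@"Orientation %ld: %lu face(s) found", (long)orientation, (unsigned long)sweepFeatures.count);
}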
Is there something I'm missing?