I capture an image with the camera (using UIImagePickerController) and save it to the documents directory. Then, in a different view controller, I load that image and use the CIDetector and CIFaceFeature APIs to extract the face region.
The problem is that although I am able to load the image correctly, no face is detected in it at all. If I store the same image in the main bundle, the face is detected.
I don't know where the problem lies; I have tried everything. Perhaps the issue is with the UIImage, or with the format in which the image is saved to the documents directory or produced by the camera.
Please help. I would appreciate it.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *path = [documentsDirectory stringByAppendingPathComponent:@"SampleImage.jpg"];

    // Note: a compression quality of 0 means maximum JPEG compression
    NSData *data = UIImageJPEGRepresentation(image, 0);
    [data writeToFile:path atomically:YES];

    [picker dismissModalViewControllerAnimated:YES];

    FCVC *fcvc = [[FCVC alloc] initWithImage:image];
    [self.navigationController pushViewController:fcvc animated:YES];
}
In FCVC's viewDidLoad, I call the following function, passing it the image:
- (void)markFaces:(UIImage *)pic
{
    // Note: imageWithCGImage: ignores pic.imageOrientation, so the detector
    // sees the raw pixel data in whatever orientation it was stored
    CIImage *image = [CIImage imageWithCGImage:pic.CGImage];
    CGImageRef masterFaceImage = NULL;

    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                                  forKey:CIDetectorAccuracy]];
    // create an array containing all the detected faces from the detector
    NSArray *features = [detector featuresInImage:image];
    for (CIFaceFeature *faceFeature in features)
    {
        // the original referenced facePicture here; the parameter is pic
        masterFaceImage = CGImageCreateWithImageInRect(pic.CGImage, faceFeature.bounds);
    }

    self.masterExtractedFace = [UIImage imageWithCGImage:masterFaceImage];
    CGImageRelease(masterFaceImage);
}
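One additional pitfall worth noting in the crop above (a sketch, not part of the original code): CIDetector reports `bounds` in Core Image coordinates, whose origin is at the bottom-left, while `CGImageCreateWithImageInRect` expects a rect with the origin at the top-left. The y coordinate therefore generally needs to be flipped before cropping:

```objc
// Hedged sketch: convert the face rect from Core Image's bottom-left-origin
// space into the top-left-origin space CGImageCreateWithImageInRect uses.
CGRect faceRect = faceFeature.bounds;
faceRect.origin.y = CGImageGetHeight(pic.CGImage)
                    - faceRect.origin.y
                    - faceRect.size.height;
CGImageRef face = CGImageCreateWithImageInRect(pic.CGImage, faceRect);
```

Without this flip the extracted region is mirrored vertically relative to where the face actually is.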
Thanks in advance.
Answer 0 (score: 1)
A simple fix for this, if you always use the camera in portrait orientation, is to add this small snippet:
// 6 is the EXIF "right, top" orientation the iPhone camera produces in portrait
NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6]
                                                         forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:image options:imageOptions];
If you need to figure out the orientation dynamically, determine which orientation you are in and check the image's kCGImagePropertyOrientation value.
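To do that dynamically, you can map the UIImage's `imageOrientation` to the EXIF values (1-8, per kCGImagePropertyOrientation) that CIDetectorImageOrientation expects. A sketch of such a mapping; `exifOrientationForUIImage` is a hypothetical helper name, not part of any API:

```objc
// Hedged sketch: translate UIImageOrientation into the EXIF orientation
// number CIDetectorImageOrientation expects.
static int exifOrientationForUIImage(UIImage *image)
{
    switch (image.imageOrientation) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6; // portrait camera shot
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1;
}

// Usage:
// NSDictionary *imageOptions =
//     @{CIDetectorImageOrientation : @(exifOrientationForUIImage(pic))};
// NSArray *features = [detector featuresInImage:image options:imageOptions];
```

This keeps detection working no matter how the device was held when the photo was taken.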