Once again, I'm close, but no banana.
I'm trying to follow some face-detection tutorials. I almost have the code below working, but I think I'm missing something about how to scale the face rectangles and position the border views over the UIImageView.
The photos in my photo library come in a variety of sizes (for some inexplicable reason), so my understanding is that once CIDetector finds the faces, I need to apply CGAffineTransforms and so on to place them correctly within the UIImageView. But, as you can see from the image (also below), the borders aren't being drawn in the right place.
The UIImageView is 280x500 and set to Scale to Fill.
Any help working out what's going on would be great!
-(void)detectFaces {
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *image = [CIImage imageWithCGImage:_imagePhotoChosen.image.CGImage options:nil];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Core Image reports face bounds with a bottom-left origin; flip them
    // into UIKit's top-left-origin coordinate space.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -_imagePhotoChosen.image.size.height);

    NSArray *features = [detector featuresInImage:image];
    NSLog(@"I have found %lu faces", (unsigned long)features.count);

    for (CIFaceFeature *faceFeature in features)
    {
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        NSLog(@"I have the original frame as: %@", NSStringFromCGRect(faceRect));

        // Scale from image pixels to view points (valid for Scale to Fill).
        const CGFloat scaleWidth = _imagePhotoChosen.frame.size.width / _imagePhotoChosen.image.size.width;
        const CGFloat scaleHeight = _imagePhotoChosen.frame.size.height / _imagePhotoChosen.image.size.height;
        CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth, faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

        UIView *faceView = [[UIView alloc] initWithFrame:faceFrame];
        NSLog(@"I have the bounds as: %@", NSStringFromCGRect(faceFrame));
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        faceView.layer.borderWidth = 1.0f;
        [self.view addSubview:faceView];
    }
}
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    _imagePhotoChosen.image = info[UIImagePickerControllerOriginalImage];
    //[_imagePhotoChosen sizeToFit];
    [self.view addSubview:_viewChosenPhoto];
    [picker dismissViewControllerAnimated:YES completion:nil];
    [self detectFaces];
}
I've left the NSLog statements in because I've been trying to work out where the math goes wrong, but I just can't see it! And I'm a math teacher... sigh...
Thanks again for anything you can do to point me in the right direction.
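For anyone checking the numbers: the flip-then-scale math in detectFaces can be verified in isolation. Below is a minimal C sketch of just the arithmetic, using a stand-in Rect struct rather than CGRect; the names flip_to_uikit and scale_to_view are illustrative, not part of any API.

```c
#include <assert.h>
#include <math.h>

/* Stand-in for CGRect; only the arithmetic matters here. */
typedef struct { double x, y, w, h; } Rect;

/* Core Image reports face bounds with a bottom-left origin; UIKit
 * uses a top-left origin. The scale(1,-1) + translate(0,-height)
 * transform in the question reduces to: y' = imageHeight - (y + h). */
Rect flip_to_uikit(Rect r, double imageHeight) {
    Rect out = { r.x, imageHeight - (r.y + r.h), r.w, r.h };
    return out;
}

/* Map image pixels to view points. This is only valid for the
 * Scale to Fill content mode, where the image exactly covers the view. */
Rect scale_to_view(Rect r, double viewW, double viewH,
                   double imageW, double imageH) {
    double sx = viewW / imageW, sy = viewH / imageH;
    Rect out = { r.x * sx, r.y * sy, r.w * sx, r.h * sy };
    return out;
}
```

For example, a face at (100, 700, 200, 200) in a 1000x1000 image flips to a UIKit y-origin of 100, and then scales into the 280x500 view as roughly (28, 50, 56, 100).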
In response to those wondering how I solved it... it was a really silly mistake on my part. I was adding the subviews to the main view rather than to the UIImageView. So I removed the line:

[self.view addSubview:faceView];

and replaced it with:

[_imagePhotoChosen addSubview:faceView];

This allows the frames to be placed in the correct position. The accepted answer gave me the clue! So, here is the updated code (which I've developed a bit since then):
-(void)detectFaces:(UIImage *)selectedImage {
    _imagePhotoChosen.image = selectedImage;
    CIImage *image = [CIImage imageWithCGImage:selectedImage.CGImage options:nil];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    // Flip Core Image's bottom-left-origin bounds into UIKit's
    // top-left-origin coordinate space.
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -selectedImage.size.height);

    NSArray *features = [detector featuresInImage:image];
    int i = 0;
    for (CIFaceFeature *faceFeature in features)
    {
        const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);
        const CGFloat scaleWidth = _imagePhotoChosen.frame.size.width / _imagePhotoChosen.image.size.width;
        const CGFloat scaleHeight = _imagePhotoChosen.frame.size.height / _imagePhotoChosen.image.size.height;
        CGRect faceFrame = CGRectMake(faceRect.origin.x * scaleWidth, faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);

        UIView *faceView = [[UIView alloc] initWithFrame:faceFrame];
        faceView.layer.borderColor = [[UIColor redColor] CGColor];
        faceView.layer.borderWidth = 1.0f;
        faceView.tag = i;

        UITapGestureRecognizer *selectPhotoTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(selectPhoto)];
        selectPhotoTap.numberOfTapsRequired = 1;
        selectPhotoTap.numberOfTouchesRequired = 1;
        [faceView addGestureRecognizer:selectPhotoTap];

        // Note: UIImageView's userInteractionEnabled defaults to NO, so it
        // must be set to YES for taps on faceView to be delivered.
        [_imagePhotoChosen addSubview:faceView];
        i++;
    }
}
Answer 0 (score: 2)
Actually, what you are doing is completely right; just replace the faceFrame line with:

CGRect faceFrame = CGRectMake(_imagePhotoChosen.frame.origin.x + faceRect.origin.x * scaleWidth, _imagePhotoChosen.frame.origin.y + faceRect.origin.y * scaleHeight, faceRect.size.width * scaleWidth, faceRect.size.height * scaleHeight);
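Why this works: a subview's frame is interpreted in its superview's coordinate space. The rect computed above is correct relative to the image view, so when faceView is added to self.view instead, it has to be shifted by the image view's own origin. A tiny C sketch of that relationship, using a stand-in Point type and a made-up helper name:

```c
#include <assert.h>

typedef struct { double x, y; } Point;

/* A frame origin valid inside the image view, re-expressed in the
 * image view's parent: shift by the image view's own origin. */
Point to_parent_coords(Point inImageView, Point imageViewOrigin) {
    Point p = { inImageView.x + imageViewOrigin.x,
                inImageView.y + imageViewOrigin.y };
    return p;
}
```

Equivalently, adding faceView to _imagePhotoChosen itself, as the question author ended up doing, keeps everything in one coordinate space and makes the offset unnecessary.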