Coordinate system of CIFaceFeature

Asked: 2013-08-07 13:09:10

Tags: iphone ios face-detection

I'm doing face detection using the CIFaceFeature class, and I'm a bit confused about Core Graphics coordinates versus regular UIKit coordinates. Here is my code:

    UIImage *mainImage = [UIImage imageNamed:@"facedetectionpic.jpg"];

    CIImage *image = [[CIImage alloc] initWithImage:mainImage];
    NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
    NSArray *features = [detector featuresInImage:image];

    CGRect faceRect;

    for (CIFaceFeature *feature in features)
    {
        faceRect = [feature bounds];
    }

This is all pretty standard. Now, the official documentation says:

bounds — The rectangle that holds the discovered feature. (read-only)

Discussion: The rectangle is in the coordinate system of the image.

When I print faceRect directly, I get: {{136, 427}, {46, 46}}. When I apply a CGAffineTransform to flip it the right way up, I get negative coordinates, which doesn't seem right. The image I'm using is displayed in an image view.

So which coordinate system are these coordinates in? The image's? The image view's? Core Graphics coordinates? Regular UIKit coordinates?

1 Answer:

Answer 0 (score: 3)

I finally figured it out. As the documentation states, the rectangle that CIFaceFeature returns is in the coordinate system of the image. That means the rectangle is expressed in the original image's coordinates. If the "Autoresize" option is checked, the image is scaled down to fit the UIImageView, so you need to convert the old image coordinates into the new view coordinates.

This code, which I adapted from here, should do the trick for you:

    - (CGPoint)convertPointFromImage:(CGPoint)imagePoint {

        CGPoint viewPoint = imagePoint;

        CGSize imageSize = self.setBody.image.size;
        CGSize viewSize  = self.setBody.bounds.size;

        CGFloat ratioX = viewSize.width / imageSize.width;
        CGFloat ratioY = viewSize.height / imageSize.height;

        UIViewContentMode contentMode = self.setBody.contentMode;

        if (contentMode == UIViewContentModeScaleAspectFit ||
            contentMode == UIViewContentModeScaleAspectFill)
        {
            // Aspect fit scales by the smaller ratio so the whole image is
            // visible; aspect fill scales by the larger ratio to cover the view.
            CGFloat scale;

            if (contentMode == UIViewContentModeScaleAspectFit) {
                scale = MIN(ratioX, ratioY);
            }
            else /* UIViewContentModeScaleAspectFill */ {
                scale = MAX(ratioX, ratioY);
            }

            viewPoint.x *= scale;
            viewPoint.y *= scale;

            // Offset by half the leftover space to account for the scaled
            // image being centered in the view.
            viewPoint.x += (viewSize.width  - imageSize.width  * scale) / 2.0f;
            viewPoint.y += (viewSize.height - imageSize.height * scale) / 2.0f;
        }

        return viewPoint;
    }