I'm trying to figure out how to convert the CGPoint results returned by CIFaceFeature so that I can use them for drawing in a CALayer. Previously I normalized my images to 0 rotation to make things easier, but that causes problems for images captured with the device in landscape mode.

I've been working on this for a while without success, and I'm not sure whether my understanding of the task is wrong, my approach is wrong, or both. Here is what I believe to be correct:

According to the documentation for the CIDetector featuresInImage:options: method:
A dictionary that specifies the orientation of the image. The detection is
adjusted to account for the image orientation but the coordinates in the
returned feature objects are based on those of the image.
In the code below I'm trying to rotate the CGPoint so that I can draw it via a CAShapeLayer overlaying the UIImageView.

What I'm doing (... or what I think I'm doing ...) is translating the left-eye CGPoint to the center of the view, rotating it 90 degrees, and then translating the point back to its original position. That isn't correct, but I can't tell where I'm going wrong. Is my approach wrong, or the way I'm implementing it?
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
// leftEyePosition is a CGPoint
CGAffineTransform transRot = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(90));
float x = self.center.x;
float y = self.center.y;
CGAffineTransform tCenter = CGAffineTransformMakeTranslation(-x, -y);
CGAffineTransform tOffset = CGAffineTransformMakeTranslation(x, y);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tCenter);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, transRot);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, tOffset);
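
For reference, the same translate-rotate-translate sequence can be collapsed into a single transform. A minimal sketch using the variables above (this only composes the three steps into one matrix; it does not by itself fix the orientation problem):

// With CGAffineTransformConcat(a, b), 'a' is applied to the point first.
CGAffineTransform composed =
    CGAffineTransformConcat(CGAffineTransformConcat(tCenter, transRot), tOffset);
leftEyePosition = CGPointApplyAffineTransform(leftEyePosition, composed);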
From this post: https://stackoverflow.com/a/14491293/840992, I need to rotate according to imageOrientation.
Orientation

Apple / UIImage.imageOrientation      Jpeg / File kCGImagePropertyOrientation
UIImageOrientationUp    = 0   =   Landscape left    = 1
UIImageOrientationDown  = 1   =   Landscape right   = 3
UIImageOrientationLeft  = 2   =   Portrait down     = 8
UIImageOrientationRight = 3   =   Portrait up       = 6
Answer 0 (score: 6)
I needed to solve exactly the same problem. The Apple sample "SquareCam" works directly on the video output, but I also needed the results for a UIImage. So I extended the CIFaceFeature class with some conversion methods to get the correct point positions and bounds with respect to a UIImage and its UIImageView (or the CALayer of a UIView). The complete implementation is posted here: https://gist.github.com/laoyang/5747004. You can use it directly.

Here is the most basic conversion of a point from CIFaceFeature; the returned CGPoint is converted according to the image's orientation:
- (CGPoint) pointForImage:(UIImage*) image fromPoint:(CGPoint) originalPoint {

    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;

    CGPoint convertedPoint;

    switch (image.imageOrientation) {
        case UIImageOrientationUp:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDown:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeft:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        case UIImageOrientationRight:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationUpMirrored:
            convertedPoint.x = imageWidth - originalPoint.x;
            convertedPoint.y = imageHeight - originalPoint.y;
            break;
        case UIImageOrientationDownMirrored:
            convertedPoint.x = originalPoint.x;
            convertedPoint.y = originalPoint.y;
            break;
        case UIImageOrientationLeftMirrored:
            convertedPoint.x = imageWidth - originalPoint.y;
            convertedPoint.y = originalPoint.x;
            break;
        case UIImageOrientationRightMirrored:
            convertedPoint.x = originalPoint.y;
            convertedPoint.y = imageHeight - originalPoint.x;
            break;
        default:
            break;
    }
    return convertedPoint;
}
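
These flips compensate for the coordinate-system difference: Core Image reports feature coordinates with the origin at the bottom-left of the image, while UIKit and CALayer drawing use a top-left origin. For example, in the UIImageOrientationUp case a point at (10, 20) in a 300 x 400 image comes back as (10, 400 - 20) = (10, 380).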
And here are the category methods based on the above conversion:
// Get converted features with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image;
- (CGPoint) rightEyePositionForImage:(UIImage *)image;
- (CGPoint) mouthPositionForImage:(UIImage *)image;
- (CGRect) boundsForImage:(UIImage *)image;
// Get normalized features (0-1) with respect to the imageOrientation property
- (CGPoint) normalizedLeftEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedRightEyePositionForImage:(UIImage *)image;
- (CGPoint) normalizedMouthPositionForImage:(UIImage *)image;
- (CGRect) normalizedBoundsForImage:(UIImage *)image;
// Get feature location inside of a given UIView size with respect to the imageOrientation property
- (CGPoint) leftEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) rightEyePositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGPoint) mouthPositionForImage:(UIImage *)image inView:(CGSize)viewSize;
- (CGRect) boundsForImage:(UIImage *)image inView:(CGSize)viewSize;
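
As a rough illustration of the normalized variants, the 0-1 coordinates can be scaled into any target size. A hypothetical snippet (face and photo are assumed to be an already-detected CIFaceFeature and its source UIImage):

// Normalized (0-1) coordinates are independent of the image's pixel size,
// so they can be mapped into any view or layer by simple multiplication.
CGSize targetSize = CGSizeMake(320.0, 240.0);
CGPoint normalizedEye = [face normalizedLeftEyePositionForImage:photo];
CGPoint eyeInTarget = CGPointMake(normalizedEye.x * targetSize.width,
                                  normalizedEye.y * targetSize.height);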
One other thing to note: when extracting face features from a UIImage you need to specify the correct EXIF orientation for the image's UIImageOrientation. It's rather confusing... this is what I did:
int exifOrientation;
switch (self.image.imageOrientation) {
    case UIImageOrientationUp:
        exifOrientation = 1;
        break;
    case UIImageOrientationDown:
        exifOrientation = 3;
        break;
    case UIImageOrientationLeft:
        exifOrientation = 8;
        break;
    case UIImageOrientationRight:
        exifOrientation = 6;
        break;
    case UIImageOrientationUpMirrored:
        exifOrientation = 2;
        break;
    case UIImageOrientationDownMirrored:
        exifOrientation = 4;
        break;
    case UIImageOrientationLeftMirrored:
        exifOrientation = 5;
        break;
    case UIImageOrientationRightMirrored:
        exifOrientation = 7;
        break;
    default:
        break;
}
NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];
NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:self.image.CGImage]
options:@{CIDetectorImageOrientation:[NSNumber numberWithInt:exifOrientation]}];
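
Putting the pieces together, a hypothetical end-to-end sketch (the category header name and the imageView outlet are assumptions; the detection part mirrors the code above):

#import "CIFaceFeature+UIImage.h"  // hypothetical header name for the gist's category

UIImage *photo = self.imageView.image;
NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh };
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:detectorOptions];
// exifOrientation is derived from photo.imageOrientation as shown above.
NSArray *features = [faceDetector featuresInImage:[CIImage imageWithCGImage:photo.CGImage]
                                          options:@{ CIDetectorImageOrientation : @(exifOrientation) }];
for (CIFaceFeature *face in features) {
    if (face.hasLeftEyePosition) {
        // Converted into the image view's coordinate space, ready for a CALayer overlay.
        CGPoint leftEye = [face leftEyePositionForImage:photo
                                                 inView:self.imageView.bounds.size];
        NSLog(@"Left eye in view coordinates: %@", NSStringFromCGPoint(leftEye));
    }
}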
Answer 1 (score: 0)
I think you need to mirror the detected face coordinates about the image's horizontal center axis. You could try this transform:
CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0.0f, image.size.height);
transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
[path applyTransform:transform];
This transform only works if we set image.imageOrientation to 0 before finding the faces.
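
For example, a minimal sketch of that idea (assuming image is an orientation-normalized UIImage, faceFeature came from a CIDetector run on it, and shapeLayer is a hypothetical CAShapeLayer over the image view):

// Build a path from the face bounds (Core Image coordinates, origin at bottom-left)
// and flip it vertically into UIKit's top-left-origin space.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:faceFeature.bounds];

CGAffineTransform transform = CGAffineTransformIdentity;
transform = CGAffineTransformTranslate(transform, 0.0f, image.size.height);
transform = CGAffineTransformScale(transform, 1.0f, -1.0f);
[path applyTransform:transform];

shapeLayer.path = path.CGPath;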