Cropping with face detection

Date: 2013-06-24 01:26:38

Tags: ios

I'm modifying Apple's SquareCam sample face-detection app so that, instead of drawing a red square around the face, it crops the face out before writing the image to the camera roll. I'm using the same CGRect for the crop that is used to draw the red square, yet the behavior differs. In portrait mode, if the face is at the horizontal center of the screen, the face is cropped as expected (exactly where the red square was). If the face is off to the left or right, the crop always seems to be taken from the middle of the screen rather than from where the red square is.

Here is Apple's original code:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
                                            inCGImage:(CGImageRef)backgroundImage 
                                      withOrientation:(UIDeviceOrientation)orientation 
                                          frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(backgroundImage), CGImageGetHeight(backgroundImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, backgroundImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }
    UIImage *rotatedSquareImage = [square imageRotatedByDegrees:rotationDegrees];

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));
        CGContextDrawImage(bitmapContext, faceRect, [rotatedSquareImage CGImage]);
    }
    returnImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease (bitmapContext);

    return returnImage;
}

And my replacement:

- (CGImageRef)newSquareOverlayedImageForFeatures:(NSArray *)features 
                                            inCGImage:(CGImageRef)backgroundImage 
                                      withOrientation:(UIDeviceOrientation)orientation 
                                          frontFacing:(BOOL)isFrontFacing
{
    CGImageRef returnImage = NULL;

    //I'm only taking pics with one face. This is just for testing
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        returnImage = CGImageCreateWithImageInRect(backgroundImage, faceRect);
    }

    return returnImage;
}

UPDATE:

Based on Wain's input, I tried to make my code more like the original, but the result is the same:

- (NSArray *)extractFaceImages:(NSArray *)features
                   fromCGImage:(CGImageRef)sourceImage
               withOrientation:(UIDeviceOrientation)orientation
                   frontFacing:(BOOL)isFrontFacing
{
    NSMutableArray *faceImages = [[[NSMutableArray alloc] initWithCapacity:1] autorelease];

    CGImageRef returnImage = NULL;
    CGRect backgroundImageRect = CGRectMake(0., 0., CGImageGetWidth(sourceImage), CGImageGetHeight(sourceImage));
    CGContextRef bitmapContext = CreateCGBitmapContextForSize(backgroundImageRect.size);
    CGContextClearRect(bitmapContext, backgroundImageRect);
    CGContextDrawImage(bitmapContext, backgroundImageRect, sourceImage);
    CGFloat rotationDegrees = 0.;

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            rotationDegrees = -90.;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            rotationDegrees = 90.;
            break;
        case UIDeviceOrientationLandscapeLeft:
            if (isFrontFacing) rotationDegrees = 180.;
            else rotationDegrees = 0.;
            break;
        case UIDeviceOrientationLandscapeRight:
            if (isFrontFacing) rotationDegrees = 0.;
            else rotationDegrees = 180.;
            break;
        case UIDeviceOrientationFaceUp:
        case UIDeviceOrientationFaceDown:
        default:
            break; // leave the layer in its last known orientation
    }

    // features found by the face detector
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];

        NSLog(@"faceRect=%@", NSStringFromCGRect(faceRect));

        returnImage = CGBitmapContextCreateImage(bitmapContext);
        returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);
        UIImage *clippedFace = [UIImage imageWithCGImage:returnImage];
        [faceImages addObject:clippedFace];
    }

    CGContextRelease(bitmapContext);

    return faceImages;
}

I took three pictures and logged faceRect for each:

Picture taken with the face near the left edge of the device. The captured image completely misses the face, off to the right: faceRect = {{972, 43.0312}, {673.312, 673.312}}

Picture taken with the face in the middle of the device. The captured image is fine: faceRect = {{1060.59, 536.625}, {668.25, 668.25}}

Picture taken with the face near the right edge of the device. The captured image completely misses the face, off to the left: faceRect = {{982.125, 999.844}, {804.938, 804.938}}

So it seems that x and y are swapped. I'm holding the device in portrait, but faceRect appears to be expressed in landscape coordinates. I can't figure out which part of Apple's original code accounts for this, though; the orientation code in that method only seems to affect the red square overlay image itself.
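One quick way to test this theory is to log the source image's pixel dimensions next to faceRect. A minimal debugging sketch (reusing the sourceImage and faceRect variables from the updated method above):

    // If width > height here while the device is held in portrait, the
    // buffer is landscape-native and faceRect is expressed in that space
    size_t imageWidth  = CGImageGetWidth(sourceImage);
    size_t imageHeight = CGImageGetHeight(sourceImage);
    NSLog(@"image = %zu x %zu, faceRect = %@",
          imageWidth, imageHeight, NSStringFromCGRect(faceRect));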

2 Answers:

Answer 0 (score: 3):

You should keep all of the original code and add a single line before the return (with a tweak to move the image generation inside the loop, since you are only cropping the first face):

returnImage = CGImageCreateWithImageInRect(returnImage, faceRect);

This renders the image with the correct orientation, which means the face rectangle will be in the right place.
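Concretely, the end of the original newSquareOverlayedImageForFeatures: method would then look something like this (a sketch only: the red-square drawing is dropped since the goal is a clean crop, and the intermediate full-size render is released to avoid leaking it):

    // features found by the face detector -- crop the first one out of
    // the correctly oriented bitmap render instead of overlaying a square
    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        CGImageRef renderedImage = CGBitmapContextCreateImage(bitmapContext);
        returnImage = CGImageCreateWithImageInRect(renderedImage, faceRect);
        CGImageRelease(renderedImage); // release the full-size render
    }
    CGContextRelease(bitmapContext);

    return returnImage;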

Answer 1 (score: 0):

You are running into this because the saved image is flipped vertically, so faceRect's position no longer lines up with the face. You can fix it by flipping faceRect vertically within returnImage:

    for ( CIFaceFeature *ff in features ) {
        CGRect faceRect = [ff bounds];
        // Flip the rect vertically so it matches the image's coordinate space
        CGRect modifiedRect = CGRectFlipVertical(faceRect, CGRectMake(0, 0, CGImageGetWidth(returnImage), CGImageGetHeight(returnImage)));
        CGImageRef croppedFace = CGImageCreateWithImageInRect(returnImage, modifiedRect);
        UIImage *clippedFace = [UIImage imageWithCGImage:croppedFace];
        CGImageRelease(croppedFace); // release the crop once wrapped in a UIImage
        [faceImages addObject:clippedFace];
    }

where CGRectFlipVertical(CGRect innerRect, CGRect outerRect) can be defined like this:

    CGRect CGRectFlipVertical(CGRect innerRect, CGRect outerRect)
    {
      CGRect rect = innerRect;
      rect.origin.y = outerRect.origin.y + outerRect.size.height - (rect.origin.y + rect.size.height);
      return rect;
    }
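
Hypothetical usage with made-up pixel dimensions (the capture size is never stated in the question; 3264 x 2448 is only for illustration), applied to the first logged faceRect:

    // Illustrative values only: assume a 3264 x 2448 landscape capture
    CGRect imageRect = CGRectMake(0, 0, 3264, 2448);
    CGRect faceRect  = CGRectMake(972, 43.0312, 673.312, 673.312);
    CGRect flipped   = CGRectFlipVertical(faceRect, imageRect);
    NSLog(@"flipped = %@", NSStringFromCGRect(flipped));
    // origin.y becomes 2448 - (43.0312 + 673.312) = 1731.66

This moves the crop rect from the top of the buffer to the bottom, which is where the face would actually be if the render is vertically flipped as described above.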