UIImagePNGRepresentation and masked images

Date: 2010-11-15 21:51:39

Tags: iphone quartz-graphics uiimagepngrepresentation

  1. I created a masked image using a function from an iPhone blog:

    UIImage * imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];

  2. It looks fine in a UIImageView:

    UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
    imgView.center = CGPointMake(160.0f, 140.0f);
    [self.view addSubview:imgView];
    
  3. I save it to disk with UIImagePNGRepresentation:

    [UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];

  4. UIImagePNGRepresentation returns NSData for an image that looks different.

    The output is the inverse of the image mask: the region that was cut out in the app is now visible in the file, and the region that was visible in the app has been removed. Visibility is reversed.

    My mask is designed to remove everything except the face area of the picture. The UIImage looks correct in the app, but after I save it to disk the file looks the opposite: the face is removed, but everything else is still there.

    Please let me know if you can help!
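
(For context: `findUniqueSavePath` is not shown in the post. A minimal sketch of what such a helper might look like, assuming it is meant to return an unused path in the app's Documents directory; this is a guess, not the asker's actual code:)

```objectivec
// Assumed implementation, not from the original post: return a path in
// Documents that no existing file occupies, by appending a counter.
- (NSString *)findUniqueSavePath
{
    NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSFileManager *fm = [NSFileManager defaultManager];
    NSUInteger i = 0;
    NSString *path;
    do {
        path = [docs stringByAppendingPathComponent:[NSString stringWithFormat:@"masked-%lu.png", (unsigned long)i++]];
    } while ([fm fileExistsAtPath:path]);
    return path;
}
```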

2 answers:

Answer 0 (score: 3):

In Quartz you can mask with either an image mask (black passes, white blocks) or a normal image used as a mask (white passes, black blocks); the two conventions are opposites. It seems that, for some reason, saving treats your image mask as a normal masking image. One idea is to render into a bitmap context and then create an image from that context to save.
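
That re-render idea can be sketched as a small helper (a hypothetical `flattenImage:` method, not from the original post) that draws the masked image into a plain bitmap-backed context, so the saved PNG is an ordinary image rather than one backed by an image mask:

```objectivec
// Hypothetical helper: re-render an image through a bitmap context so the
// result is a plain rendered image with no image-mask semantics attached.
+ (UIImage *)flattenImage:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0.0f, 0.0f, image.size.width, image.size.height)];
    UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return flattened;
}
```

Saving `UIImagePNGRepresentation([self flattenImage:imgToSave])` should then match what the UIImageView displays.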

Answer 1 (score: 0):

I had exactly the same problem: the saved file came out one way, but the image returned in memory was exactly the opposite.

The culprit & solution turned out to be UIImagePNGRepresentation(). It fixes the image inside the app before it is saved to disk, so I simply inserted that function as the last step in creating the masked image, and return the image it produces.

This probably isn't the most elegant solution, but it works. I copied some code from my app and condensed it; I'm not sure whether the version below works as-is, but if not, it's close... probably just some typos.

Enjoy. :)

// MyImageHelperObj.h

@interface MyImageHelperObj : NSObject

+ (UIImage *) createGrayScaleImage:(UIImage*)originalImage;
+ (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;

@end





// MyImageHelperObj.m

#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"


@implementation MyImageHelperObj


+ (UIImage *) createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // draw mask image into its own context so it is not blended over the source
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create grayscale version of mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    size_t width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    size_t height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    size_t bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);

    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}

+ (UIImage *) createGrayScaleImage:(UIImage*)originalImage
{
    // create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    // create 8-bit bitmap context without an alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    // draw the image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    // get an image from the bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    //image is inverted. UIImage inverts orientation while converting CGImage to UIImage.
    UIImage* image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end
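
A call site for the helper above might look like this (reusing the image names from the question; passing the source image's own size is an assumption):

```objectivec
UIImage *source = [UIImage imageNamed:@"pic.jpg"];
UIImage *mask = [UIImage imageNamed:@"sd-face-mask.png"];
UIImage *masked = [MyImageHelperObj createMaskedImageWithSize:source.size
                                                  sourceImage:source
                                                    maskImage:mask];
[UIImagePNGRepresentation(masked) writeToFile:[self findUniqueSavePath] atomically:YES];
```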