Core Image CIHueAdjust filter gives wrong colors

Date: 2017-12-21 02:15:29

Tags: ios objective-c core-image

I am using Dondragmer's "Rotated Hue" example from this question: How to programmatically change the hue of UIImage?

However, the results I get are very different from what I expected. I expected the hue change to give results similar to changing the hue in Photoshop, but the two are not at all alike.

To illustrate, using this source image:

source image

A pi/4 (45-degree) rotation in Photoshop yields:

45 degree rotation Photoshop

Using Dondragmer's code I get:

45 degree rotation CIHueAdjust

Similarly, a 180-degree rotation in Photoshop yields:

180 degree rotation Photoshop

But the CIHueAdjust filter produces:

180 degree rotation CIHueAdjust

My code:

- (CGImageRef)changeHueOfImage:(CGImageRef)source by:(NSInteger)angle
{
    CIImage *image = [CIImage imageWithCGImage:source];

    // Convert degrees to radians
    CGFloat angleRadians = GLKMathDegreesToRadians(angle);

    // Use the Core Image CIHueAdjust filter to change the hue
    CIFilter *hueFilter = [CIFilter filterWithName:@"CIHueAdjust"];
    [hueFilter setDefaults];
    [hueFilter setValue:image forKey:@"inputImage"];
    [hueFilter setValue:[NSNumber numberWithFloat:angleRadians] forKey:@"inputAngle"];
    image = [hueFilter outputImage];

    // Save the modified image
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef result = [context createCGImage:image fromRect:[image extent]];

    return result;
}

My questions:

  1. Am I misunderstanding what the CIHueAdjust filter does?
  2. Do I need to account for the filter's handling of lightness and saturation?
  3. How can I replicate the Photoshop behavior?
  4. In general, why are the results so different?

2 answers:

Answer 0 (score: 3)

I ran into a similar problem when trying to update the lightness of a CIImage. The issue is that Core Image works in RGB space, whereas modifying hue, saturation, or lightness should be done in HSV space.

What I ended up doing was manipulating every pixel, using this snippet that I found here. There may well be a better way, but I created a UIColor from each pixel's RGB values, read the HSV components from that color, updated the component I wanted, created a new UIColor, and used that color's RGB values to modify the image.

CGImageRef imageRef = [img CGImage];
uint width = CGImageGetWidth(imageRef);
uint height = CGImageGetHeight(imageRef);
unsigned char *pixels = malloc(height*width*4); //1d array with size for every pixel. Each pixel has the components: Red,Green,Blue,Alpha

CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, 4*width, colorSpaceRef, kCGImageAlphaPremultipliedLast); //our quartz2d drawing env
CGColorSpaceRelease(colorSpaceRef);

CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

for (int y=0;y<height;++y){
    for (int x=0;x<width;++x){
        int idx = (width*y+x)*4; //the index of pixel(x,y) in the 1d array pixels

        //Pixel manipulation here

        //red = pixels[idx]
        //green = pixels[idx+1]
        //blue = pixels[idx+2]
        //alpha = pixels[idx+3]

        //pixels[idx] = NEW_RED_VALUE

        //Please note that this assumes an RGBA layout, i.e. alpha stored as the
        //last byte of each pixel. See kCGImageAlphaPremultipliedLast for more info.
        //Change if needed, and also update the bitmapInfo passed to CGBitmapContextCreate.
    }
}


imageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
free(pixels);

//load our new image
UIImage* newImg = [UIImage imageWithCGImage:imageRef];

Note: I did not write this code; I found it here and pasted it in in case the gist gets deleted. All credit to bjorndagerman@github.

You will probably get better results if you implement the RGB -> HSV -> RGB conversion yourself, and you will see a performance hit if you try to run this filter many times per second. But it is the best suggestion I have: I could not find any way to modify hue, saturation, or lightness in HSV space within Core Image.

Update: As @dfd mentioned in the comments, you could also write a custom kernel that performs these calculations. It would be much faster, but you will need to google how to convert RGB to HSV and back.

Answer 1 (score: 0)

I never found an answer to my question of why CIHueAdjust does not give the expected results. But I did come up with alternative code that gives me the result I want:

#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8(x >> 8 ) )
#define B(x) ( Mask8(x >> 16) )
#define A(x) ( Mask8(x >> 24) )
#define RGBAMake(r, g, b, a) ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )

- (CGImageRef)changeHueForImage:(CGImageRef)image by:(NSInteger)angle
{
    // Get size and allocate array of UIColors
    NSInteger width = CGImageGetWidth(image);
    NSInteger height = CGImageGetHeight(image);
    NSInteger count = width * height;

    // Draw image into pixel buffer
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    NSInteger bytesPerPixel = 4;
    NSInteger bitsPerComponent = 8;

    NSInteger bytesPerRow = bytesPerPixel * width;

    UInt32 *bitmapPixels = (UInt32 *)calloc(count, sizeof(UInt32));

    CGContextRef context = CGBitmapContextCreate(bitmapPixels, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    // Shift hue of every pixel (convert the angle once, outside the loop)
    CGFloat angleRadians = GLKMathDegreesToRadians(angle);
    for (int i = 0; i < count; i++) {
        // Get pixel
        UInt32 pixel = bitmapPixels[i];

        // Convert to HSBA
        UIColor *color = [UIColor colorWithRed:R(pixel)/255.0 green:G(pixel)/255.0 blue:B(pixel)/255.0 alpha:A(pixel)/255.0];
        CGFloat h, s, br, a;
        [color getHue:&h saturation:&s brightness:&br alpha:&a];

        // Shift by angle
        h += angleRadians;

        // Make new color
        UIColor *newColor = [UIColor colorWithHue:h saturation:s brightness:br alpha:a];

        // Convert back to RGBA
        CGFloat r, g, b;
        [newColor getRed:&r green:&g blue:&b alpha:&a];

        // Set pixel
        bitmapPixels[i] = RGBAMake((UInt32)(r * 255.0), (UInt32)(g * 255.0), (UInt32)(b * 255.0), (UInt32)(a * 255.0));
    }

    // Create the new image and clean up
    CGImageRef newImg = CGBitmapContextCreateImage(context);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(bitmapPixels);

    return newImg;
}

I'm not thrilled with this solution; in particular, there is almost certainly something more efficient than creating a new UIColor for every pixel. But it does change the hue of the image as expected.