Removing the matte (alpha bleed) after combining a JPG and an alpha mask to get a PNG image in iOS

Asked: 2012-10-29 10:29:27

Tags: ios png quartz-graphics alpha jpeg

To reduce my application's size, I ship a JPG plus a separate alpha image instead of PNGs. I am able to merge the JPG and the alpha image to get a PNG, but the problem is that it leaves a small matte (alpha bleed) around the edges. Please help me solve this.

Below is the code I wrote to produce a PNG image from the JPG and alpha images. Please help me get rid of the alpha bleed (matte). Thanks.

+ (UIImage*)pngImageWithJPEG:(UIImage*)jpegImage compressedAlphaFile:(UIImage*)alphaImage
{
    CGRect imageRect = CGRectMake(0, 0, jpegImage.size.width, jpegImage.size.height);
    size_t width = (size_t)imageRect.size.width;
    size_t height = (size_t)imageRect.size.height;

    //Pixel buffer
    uint32_t* piPixels = (uint32_t*)malloc(width * height * sizeof(uint32_t));
    memset(piPixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(piPixels, width, height, 8, sizeof(uint32_t) * width, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    //Drawing the alphaImage into the pixel buffer to separate out the alpha values
    CGContextDrawImage(context, imageRect, alphaImage.CGImage);

    //Buffer to store the alpha values from the alphaImage
    uint8_t* piAlpha = (uint8_t*)malloc(sizeof(uint8_t) * width * height);

    //Copying the alpha values from the alphaImage to the alpha buffer.
    //With kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast the in-memory
    //byte order is alpha = 0, blue = 1, green = 2, red = 3; the mask is grayscale,
    //so reading any one color byte gives the alpha value.
    for (uint32_t y = 0; y < height; y++)
    {
        for (uint32_t x = 0; x < width; x++)
        {
            uint8_t* piRGBAPixel = (uint8_t*)&piPixels[y * width + x];
            piAlpha[y * width + x] = piRGBAPixel[1];
        }
    }

    //Drawing the jpegImage into the pixel buffer.
    CGContextDrawImage(context, imageRect, jpegImage.CGImage);

    //Applying the stored alpha to the jpegImage pixels (premultiplying the color bytes)
    for (uint32_t y = 0; y < height; y++)
    {
        for (uint32_t x = 0; x < width; x++)
        {
            uint8_t* piRGBAPixel = (uint8_t*)&piPixels[y * width + x];
            float fAlpha0To1 = piAlpha[y * width + x] / 255.0f;

            //alpha = 0, blue = 1, green = 2, red = 3 (see above)
            piRGBAPixel[0] = piAlpha[y * width + x];
            piRGBAPixel[1] *= fAlpha0To1;
            piRGBAPixel[2] *= fAlpha0To1;
            piRGBAPixel[3] *= fAlpha0To1;
        }
    }

    //Creating an image from the pixel buffer
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    //Releasing resources
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(piPixels);
    free(piAlpha);

    //Creating the pngImage to return from the cgImage
    UIImage* pngImage = [UIImage imageWithCGImage:cgImage];

    //Releasing the cgImage
    CGImageRelease(cgImage);

    return pngImage;
}

1 Answer:

Answer 0: (score: 0)

A premultiplied-alpha color space handles color bleed much better. Try pre-processing the source pixels like this:

r = r*a;
g = g*a;
b = b*a;

To get the original values back you would normally invert this after reading the image (if a > 0, divide the RGB values by a), but since iOS works with premultiplied alpha natively, you don't even have to do that.


Also try the PNG8+alpha format; it can sometimes be smaller than a separate JPG + mask.