I'm trying to build some color filters for the images in my app. We use lookup images because they make it easy to copy filters over from other programs, and because they conveniently produce identical results across platforms. Our lookup images look like this:

[lookup image: a 512×512 texture laid out as an 8×8 grid of 64×64 tiles]

Previously we used GPUImage to apply the lookup images, but I'd like to avoid that dependency, since it weighs in at 5.4 MB and this is the only feature we need from it.
After several hours of searching, I can't seem to find any resource on how to filter an image from a lookup image with CoreImage. Looking through the documentation, though, CIColorMatrix looks like the right tool. The problem is that I can't for the life of me figure out how it works. Which brings me to my question:

Does anyone have an example of how to filter an image from a lookup with CIColorMatrix? (Or any pointers on how I should go about figuring this out on my own?)
I've dug through the GPUImage code, and it looks like the shader they use to filter from a lookup image is defined as follows:

Lookup image fragment shader:
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2; // lookup texture
uniform float intensity;

void main() {
    vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);

    // Scale blue into [0, 63] to select a tile in the 8x8 grid.
    float blueColor = textureColor.b * 63.0;

    // Tiles of the two nearest blue slices.
    vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);

    vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);

    // Red/green pick the texel inside each 64x64 tile, with a half-texel
    // inset so sampling never bleeds into a neighboring tile.
    vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);

    // Blend the two blue slices, then blend with the original by intensity.
    vec4 newColor1 = texture2D(inputImageTexture2, texPos1);
    vec4 newColor2 = texture2D(inputImageTexture2, texPos2);
    vec4 newColor = mix(newColor1, newColor2, fract(blueColor));

    gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.w), intensity);
}
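For anyone else puzzling over that fragment shader: the blue channel selects one of 64 tiles in the 8×8 grid, red and green select a texel within that tile, and the two nearest blue slices are sampled and blended. Here is a CPU-side sketch of the same coordinate math in Swift; the function name lutTexPositions is mine, purely for illustration:

import CoreGraphics

// Maps one RGB pixel (components in 0...1) to the two normalized texture
// positions the shader samples, plus the blend factor between them.
// Assumes the usual 512x512 LUT laid out as an 8x8 grid of 64x64 tiles.
func lutTexPositions(r: CGFloat, g: CGFloat, b: CGFloat)
    -> (pos1: CGPoint, pos2: CGPoint, blend: CGFloat) {
    let blue = b * 63.0              // which of the 64 blue slices
    let slice1 = floor(blue)
    let slice2 = ceil(blue)

    // Origin of a slice's 64x64 tile in normalized coordinates.
    func tileOrigin(_ slice: CGFloat) -> CGPoint {
        let row = floor(slice / 8.0)
        let col = slice - row * 8.0
        return CGPoint(x: col * 0.125, y: row * 0.125)
    }

    // Half-texel inset keeps the sample inside the tile.
    func texPos(_ origin: CGPoint) -> CGPoint {
        return CGPoint(x: origin.x + 0.5 / 512.0 + (0.125 - 1.0 / 512.0) * r,
                       y: origin.y + 0.5 / 512.0 + (0.125 - 1.0 / 512.0) * g)
    }

    return (texPos(tileOrigin(slice1)), texPos(tileOrigin(slice2)), blue - slice1)
}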
Along with this vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
attribute vec4 inputTextureCoordinate2;

varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;

void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    textureCoordinate2 = inputTextureCoordinate2.xy;
}
Can I, and should I, create my own filter using these shaders?
Answer 0 (score: 0)
All credit for this answer goes to Nghia Tran. If you ever see this, thank you!

It turns out there has been an answer out there all along. Nghia Tran wrote an article here in which he solves my exact use case. He generously provides an extension for generating a CIFilter from a lookup image, which I'll paste below to preserve this answer for future developers.

If you're using Swift, you'll need to import CIFilter+LUT.h in your bridging header.
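For reference, the bridging header itself needs only the one import. The file name below is hypothetical; use whatever bridging header your target already has:

// MyApp-Bridging-Header.h (hypothetical file name)
#import "CIFilter+LUT.h"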
Here's a snippet demonstrating how to use it on the GPU, in Swift 4. This is far from optimized (the context should be cached, and so on), but it's a good starting point.
static func applyFilter(with lookupImage: UIImage, to image: UIImage) -> UIImage? {
    guard let cgInputImage = image.cgImage else {
        return nil
    }
    guard let glContext = EAGLContext(api: .openGLES2) else {
        return nil
    }
    let ciContext = CIContext(eaglContext: glContext)
    guard let lookupFilter = CIFilter(lookupImage: lookupImage, dimension: 64) else {
        return nil
    }
    lookupFilter.setValue(CIImage(cgImage: cgInputImage),
                          forKey: "inputImage")
    guard let output = lookupFilter.outputImage else {
        return nil
    }
    guard let cgOutputImage = ciContext.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgOutputImage)
}
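One note on the snippet above: EAGLContext and the OpenGL ES renderer are deprecated as of iOS 12, so on newer systems a Metal-backed CIContext is the natural replacement, and the rest of the code stays the same. A sketch of a cached context (sharedCIContext is my name, not part of the original answer):

import CoreImage
import Metal

// One shared, Metal-backed Core Image context, created once and reused.
// MTLCreateSystemDefaultDevice() can return nil, so fall back to a default context.
let sharedCIContext: CIContext = {
    if let device = MTLCreateSystemDefaultDevice() {
        return CIContext(mtlDevice: device)
    }
    return CIContext()
}()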
CIFilter+LUT.h
#import <CoreImage/CoreImage.h>
@import UIKit.UIImage;
@class CIFilter;
@interface CIFilter (LUT)
+ (CIFilter *)filterWithLookupImage:(UIImage *)image dimension:(NSInteger)n;
@end
CIFilter+LUT.m
#import "CIFilter+LUT.h"
#import <CoreImage/CoreImage.h>
#import <OpenGLES/EAGL.h>
@implementation CIFilter (LUT)
+ (CIFilter *)filterWithLookupImage:(UIImage *)image dimension:(NSInteger)n {
    NSInteger width = CGImageGetWidth(image.CGImage);
    NSInteger height = CGImageGetHeight(image.CGImage);
    NSInteger rowNum = height / n;
    NSInteger columnNum = width / n;

    if ((width % n != 0) || (height % n != 0) || (rowNum * columnNum != n)) {
        NSLog(@"Invalid colorLUT");
        return nil;
    }

    unsigned char *bitmap = [self createRGBABitmapFromImage:image.CGImage];
    if (bitmap == NULL) {
        return nil;
    }

    NSInteger size = n * n * n * sizeof(float) * 4;
    float *data = malloc(size);
    int bitmapOffset = 0;
    int z = 0;

    // Remap the 2D grid of tiles into CIColorCube's layout: red varies
    // fastest, then green, then blue. z tracks the blue slice index.
    for (int row = 0; row < rowNum; row++) {
        for (int y = 0; y < n; y++) {
            int tmp = z;
            for (int col = 0; col < columnNum; col++) {
                for (int x = 0; x < n; x++) {
                    float r = (unsigned int)bitmap[bitmapOffset];
                    float g = (unsigned int)bitmap[bitmapOffset + 1];
                    float b = (unsigned int)bitmap[bitmapOffset + 2];
                    float a = (unsigned int)bitmap[bitmapOffset + 3];

                    NSInteger dataOffset = (z * n * n + y * n + x) * 4;
                    data[dataOffset] = r / 255.0;
                    data[dataOffset + 1] = g / 255.0;
                    data[dataOffset + 2] = b / 255.0;
                    data[dataOffset + 3] = a / 255.0;

                    bitmapOffset += 4;
                }
                z++;
            }
            z = tmp;
        }
        z += columnNum;
    }

    free(bitmap);

    CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
    [filter setValue:[NSData dataWithBytesNoCopy:data length:size freeWhenDone:YES]
              forKey:@"inputCubeData"];
    [filter setValue:[NSNumber numberWithInteger:n]
              forKey:@"inputCubeDimension"];
    return filter;
}
+ (unsigned char *)createRGBABitmapFromImage:(CGImageRef)image {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    unsigned char *bitmap;
    NSInteger bitmapSize;
    NSInteger bytesPerRow;

    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    bytesPerRow = (width * 4);
    bitmapSize = (bytesPerRow * height);

    bitmap = malloc(bitmapSize);
    if (bitmap == NULL) {
        return NULL;
    }

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        free(bitmap);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmap, width, height, 8, bytesPerRow,
                                    colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        free(bitmap);
        return NULL;
    }

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    return bitmap;
}
@end
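A note that may help anyone verifying the remapping loop above: CIColorCube expects the cube as RGBA floats with red varying fastest, then green, then blue, which is exactly the ordering the dataOffset computation produces. As a sanity check, here is a Swift sketch (the helper name identityColorCubeFilter is mine) that builds an identity cube in that layout; running an image through it should return it unchanged:

import CoreImage

// Builds an identity color cube for CIColorCube: each lattice point maps
// a color to itself. Index order: red fastest, then green, then blue.
func identityColorCubeFilter(dimension n: Int = 64) -> CIFilter? {
    var data = [Float]()
    data.reserveCapacity(n * n * n * 4)
    for b in 0..<n {
        for g in 0..<n {
            for r in 0..<n {
                data.append(Float(r) / Float(n - 1))
                data.append(Float(g) / Float(n - 1))
                data.append(Float(b) / Float(n - 1))
                data.append(1.0) // opaque alpha
            }
        }
    }
    let cubeData = data.withUnsafeBufferPointer { Data(buffer: $0) }
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(cubeData, forKey: "inputCubeData")
    filter?.setValue(n, forKey: "inputCubeDimension")
    return filter
}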