I'm doing some testing with the CIPixellate filter and I have it working, but the resulting images vary in size. I suppose that makes sense since I am varying the inputScale, but it's not what I expected - I thought it would scale within the rect of the image.
Am I misunderstanding/misusing the filter, or do I just need to crop the output image to the size I want?
Also, the inputCenter parameter is not clear to me from reading the headers or from trial and error. Can anyone explain what that parameter does?
NSMutableArray *tmpImages = [[NSMutableArray alloc] init];
for (int i = 0; i < 10; i++) {
    double scale = i * 4.0;
    UIImage *tmpImg = [self applyCIPixelateFilter:self.faceImage withScale:scale];
    printf("tmpImg width: %f height: %f\n", tmpImg.size.width, tmpImg.size.height);
    [tmpImages addObject:tmpImg];
}
tmpImg width: 480.000000 height: 640.000000
tmpImg width: 484.000000 height: 644.000000
tmpImg width: 488.000000 height: 648.000000
tmpImg width: 492.000000 height: 652.000000
tmpImg width: 496.000000 height: 656.000000
tmpImg width: 500.000000 height: 660.000000
tmpImg width: 504.000000 height: 664.000000
tmpImg width: 508.000000 height: 668.000000
tmpImg width: 512.000000 height: 672.000000
tmpImg width: 516.000000 height: 676.000000
- (UIImage *)applyCIPixelateFilter:(UIImage *)fromImage withScale:(double)scale
{
    /*
     Makes an image blocky by mapping the image to colored squares whose color is defined by the replaced pixels.
     Parameters:
       inputImage:  A CIImage object whose display name is Image.
       inputCenter: A CIVector object whose attribute type is CIAttributeTypePosition and whose display name is Center.
                    Default value: [150 150]
       inputScale:  An NSNumber object whose attribute type is CIAttributeTypeDistance and whose display name is Scale.
                    Default value: 8.00
     */
    CIContext *context = [CIContext contextWithOptions:nil];
    CIFilter *filter = [CIFilter filterWithName:@"CIPixellate"];
    CIImage *inputImage = [[CIImage alloc] initWithImage:fromImage];
    CIVector *vector = [CIVector vectorWithX:fromImage.size.width / 2.0f Y:fromImage.size.height / 2.0f];
    [filter setDefaults];
    [filter setValue:vector forKey:@"inputCenter"];
    [filter setValue:[NSNumber numberWithDouble:scale] forKey:@"inputScale"];
    [filter setValue:inputImage forKey:@"inputImage"];

    CGImageRef cgImage = [context createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
    UIImage *newImage = [UIImage imageWithCGImage:cgImage scale:1.0f orientation:fromImage.imageOrientation];
    CGImageRelease(cgImage);
    return newImage;
}
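If the goal is output the same size as the input, one option (a sketch, not from the original post) is to crop the filter's padded output back to the input image's extent before rendering; `imageByCroppingToRect:` is a standard CIImage method:

```objc
// Hypothetical variant of the render step in the method above: crop the
// padded output back to the original image's extent, so every result in
// this example stays 480x640 regardless of inputScale.
CIImage *cropped = [filter.outputImage imageByCroppingToRect:inputImage.extent];
CGImageRef cgImage = [context createCGImage:cropped fromRect:cropped.extent];
UIImage *newImage = [UIImage imageWithCGImage:cgImage
                                        scale:fromImage.scale
                                  orientation:fromImage.imageOrientation];
CGImageRelease(cgImage);
```

Note this simply discards the extra padded border; whether that is acceptable depends on how you want edge squares handled.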
Answer 0 (score: 0)
Sometimes inputScale won't evenly divide your image, and that is when I've found I get output images of different sizes.
For example, if inputScale = 0 or 1, the output image size is exact.
I found that the way the extra space around the image is centered varies, and is influenced "opaquely" by inputCenter. That is, I haven't taken the time to figure out exactly how (I was setting it from tap locations).
My solution for the varying sizes was to re-render the image into the bounds of the input image's size; I used a black background for Apple Watch.
CIFilter *pixelateFilter = [CIFilter filterWithName:@"CIPixellate"];
[pixelateFilter setDefaults];
[pixelateFilter setValue:[CIImage imageWithCGImage:editImage.CGImage] forKey:kCIInputImageKey];
[pixelateFilter setValue:@(amount) forKey:@"inputScale"];
[pixelateFilter setValue:vector forKey:@"inputCenter"];
CIImage *result = [pixelateFilter valueForKey:kCIOutputImageKey];

CIContext *context = [CIContext contextWithOptions:nil];
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];

// Re-render into a context the size of the original image so the output
// dimensions never change; fill the background first, then draw the
// (possibly larger) filtered image on top.
UIGraphicsBeginImageContextWithOptions(editImage.size, YES, [editImage scale]);
CGContextRef ref = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ref, 0, editImage.size.height);
CGContextScaleCTM(ref, 1.0, -1.0);
CGContextSetFillColorWithColor(ref, backgroundFillColor.CGColor);
CGRect drawRect = (CGRect){{0, 0}, editImage.size};
CGContextFillRect(ref, drawRect);
CGContextDrawImage(ref, drawRect, cgImage);
UIImage *filledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
returnImage = filledImage;
CGImageRelease(cgImage);
If you're going to stick with your implementation, I would suggest at least changing the way you extract the UIImage so that it uses the 'scale' of the original image (not to be confused with the CIFilter scale):
UIImage *newImage = [UIImage imageWithCGImage:cgiimage scale:fromImage.scale orientation:fromImage.imageOrientation];
Answer 1 (score: 0)
The problem is only with the extent.
Simply do:
let result = UIImage(cgImage: cgimgresult!, scale: (originalImageView.image?.scale)!, orientation: (originalImageView.image?.imageOrientation)!)
originalImageView.image = result
Answer 2 (score: 0)
As described in the Apple Core Image Programming Guide and this post:
By default, a blur filter also softens the edges of an image by blurring image pixels together with the transparent pixels that (in the filter's image-processing space) surround the image.
Hence your output image size varies with your scale.
As for inputCenter, as explained by Joshua Sullivan in the comments of this post on CIFilter, "it adjusts the offset of the pixel grid from the source image". So if the inputCenter coordinates are not a multiple of the CIPixellate inputScale, the pixel squares will be slightly offset (mostly visible at larger values of inputScale).
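Following that explanation, one way to keep the grid of squares aligned with the image origin is to snap the center point to a multiple of the scale before passing it to the filter. A minimal sketch (the tap point and values here are hypothetical, not from the answer):

```objc
// Sketch: snap a chosen center (e.g. a tap location) to a multiple of
// the pixellate scale, so the pixel-square grid stays aligned with the
// image origin instead of being offset by a fraction of a square.
double scale = 32.0;                       // inputScale
CGPoint tap = CGPointMake(157.0, 203.0);   // hypothetical tap location
CGFloat snappedX = round(tap.x / scale) * scale;
CGFloat snappedY = round(tap.y / scale) * scale;
[filter setValue:[CIVector vectorWithX:snappedX Y:snappedY]
          forKey:@"inputCenter"];
```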