We are applying a 'CIGaussianBlur' filter to a few images. Most of the time the process works fine, but when the app moves to the background it produces white bands on the image. (In the images below, note that the bottom edge of the image is banded white, and that the image is slightly shrunken compared to the original.)
The code:
- (UIImage *)imageWithBlurRadius:(CGFloat)radius
{
    UIImage *image = self;
    LOG(@"(1) image size before resize = %@", NSStringFromCGSize(image.size));
    NSData *imageData = UIImageJPEGRepresentation(self, 1.0);
    LOG(@"(2) image data length = %lu", (unsigned long)imageData.length);

    // Create our blurred image.
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];

    // Set up the Gaussian blur (we could use any of the many filters offered by Core Image).
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:[NSNumber numberWithFloat:radius] forKey:@"inputRadius"];
    CIImage *result = [filter valueForKey:kCIOutputImageKey];

    // CIGaussianBlur has a tendency to shrink the image a little;
    // this ensures it matches up exactly to the bounds of our original image.
    CGImageRef cgImage = [context createCGImage:result fromRect:[inputImage extent]];
    UIImage *finalImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    LOG(@"(3) final image size after resize = %@", NSStringFromCGSize(finalImage.size));
    return finalImage;
}
(Image: before filtering)
(Image: after filtering)
Answer 0 (score: 2)
I actually just ran into this problem myself, and found a different solution from the one @RizwanSattar describes.
Based on an exchange with "Rincewind" on the Apple developer boards, what I do is first apply a CIAffineClamp to the image, with the transform value set to identity. This creates an image at the same scale, but with infinite extent, which causes the blur to blur the edges correctly.
Then, after applying the blur, I crop the image back to its original extent, cropping away the feathering on the edges.
You can see the code in a CI filter demo app I've posted on github:
CIFilter demo project on github
It's a general-purpose program that handles all the different CI filters, but it has code to deal with the Gaussian blur filter. Take a look at the method showImage. It has special-case code to set the extent on the source image before applying the blur filter:
if ([currentFilterName isEqualToString:@"CIGaussianBlur"])
{
    CIFilter *clampFilter = [self clampFilter];
    CIImage *sourceCIImage = [CIImage imageWithCGImage:imageToEdit.CGImage];
    [clampFilter setValue:sourceCIImage
                   forKey:kCIInputImageKey];
    [clampFilter setValue:[NSValue valueWithBytes:&CGAffineTransformIdentity
                                         objCType:@encode(CGAffineTransform)]
                   forKey:@"inputTransform"];
    sourceCIImage = [clampFilter valueForKey:kCIOutputImageKey];
    [currentFilter setValue:sourceCIImage
                     forKey:kCIInputImageKey];
}
(The method clampFilter simply lazy-loads a CIAffineClamp filter.)
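For completeness, a lazily-loaded clamp filter might look like the sketch below. This is an assumption about the helper's shape, not the demo project's actual code; the instance variable name `_clampFilter` is hypothetical:

```objective-c
// Sketch of a lazily-loaded CIAffineClamp filter.
// NOTE: `_clampFilter` is a hypothetical ivar; the demo project's
// real implementation may differ.
- (CIFilter *)clampFilter
{
    if (!_clampFilter)
    {
        _clampFilter = [CIFilter filterWithName:@"CIAffineClamp"];
        // Identity transform: same scale, but the edge pixels are
        // extended outward, giving the output an infinite extent.
        [_clampFilter setValue:[NSValue valueWithBytes:&CGAffineTransformIdentity
                                              objCType:@encode(CGAffineTransform)]
                        forKey:@"inputTransform"];
    }
    return _clampFilter;
}
```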
Then I apply the user-selected filter:
outputImage = [currentFilter valueForKey: kCIOutputImageKey];
Then, after applying the selected filter, I check the extent of the resulting image and crop it back to the original extent if it's bigger:
CGSize newSize = outputImage.extent.size;
if (newSize.width > sourceImageExtent.width || newSize.height > sourceImageExtent.height)
{
    // NSLog(@"new image is bigger");
    CIFilter *cropFilter = [self cropFilter]; // Lazily load a CICrop filter
    CGRect boundsRect = CGRectMake(0, 0, sourceImageExtent.width, sourceImageExtent.height);
    [cropFilter setValue:outputImage forKey:@"inputImage"];
    CIVector *rectVector = [CIVector vectorWithCGRect:boundsRect];
    [cropFilter setValue:rectVector
                  forKey:@"inputRectangle"];
    outputImage = [cropFilter valueForKey:kCIOutputImageKey];
}
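Putting the steps together, the whole clamp, blur, and crop pipeline could be sketched as a single method. This is only a sketch under the approach described above, not code from the demo project; note that the explicit crop filter is replaced here by -[CIImage imageByCroppingToRect:], and on iOS 8 and later -[CIImage imageByClampingToExtent] could likewise replace the explicit CIAffineClamp:

```objective-c
// Sketch: clamp -> blur -> crop back to the original extent.
- (UIImage *)blurredImage:(UIImage *)image radius:(CGFloat)radius
{
    CIImage *input = [CIImage imageWithCGImage:image.CGImage];
    CGRect originalExtent = input.extent;

    // 1. Clamp: extend the edge pixels outward so the blur samples
    //    real colors at the borders instead of transparency.
    CIFilter *clamp = [CIFilter filterWithName:@"CIAffineClamp"];
    [clamp setValue:input forKey:kCIInputImageKey];
    [clamp setValue:[NSValue valueWithBytes:&CGAffineTransformIdentity
                                  objCType:@encode(CGAffineTransform)]
             forKey:@"inputTransform"];

    // 2. Blur the clamped image.
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:[clamp valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
    [blur setValue:@(radius) forKey:@"inputRadius"];

    // 3. Crop back to the original extent. The clamped image's extent
    //    is infinite, so we must crop before rendering.
    CIImage *output = [[blur valueForKey:kCIOutputImageKey]
                       imageByCroppingToRect:originalExtent];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:output fromRect:originalExtent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}
```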
Answer 1 (score: 0)
The reason you are seeing those "white bands" in your blurred image is that the resulting CIImage is bigger than your original image, because it includes the fuzzy edges of the blur. When you hard-draw the resulting image at the same size as the original, the fuzzy edges aren't accounted for.
After:
CIImage *result = [filter valueForKey:kCIOutputImageKey];
take a look at result.extent. It's a CGRect that gives the new bounding box relative to the original image. (For example, with a positive radius, result.extent.origin.y will be negative.)
Here's some code (you should really test it yourself):
CIImage *result = blurFilter.outputImage;
// The blur filter will create a larger image to cover the "fuzz", but
// we should cut that out, since it fades to transparent and looks like
// a vignette.
CGFloat imageSizeDifference = -result.extent.origin.x;
// NOTE: on iOS 7 it seems to generate an image that still ends up
// vignetting, so as a hack just multiply the vertical inset by 2x.
CGRect imageInset = CGRectInset(result.extent, imageSizeDifference, imageSizeDifference * 2);
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:result fromRect:imageInset];
Hope that helps.