How do I use depth of field with CIFilter?

Date: 2016-12-29 04:25:55

Tags: ios objective-c image-processing cifilter

I'm trying to implement depth of field with CIFilter. In the Core Image Filter Reference on Apple Developer, the description of inputPoint0 and inputPoint1 says:

  The focused region of the image stretches between inputPoint0 and inputPoint1 of the image.

So I converted the points from the UIKit coordinate system to the Core Image coordinate system and set them. But the output is always either entirely blurred or entirely in focus. There is no depth.
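For clarity, the conversion I mean is the usual y-axis flip from UIKit coordinates (origin at the top left, y pointing down) to Core Image coordinates (origin at the bottom left, y pointing up). A minimal sketch, assuming the point is already in the image's pixel coordinate space (the height value here is made up):

// Flip the y-axis to go from UIKit to Core Image coordinates.
// imagePixelHeight is hypothetical; use the actual CIImage extent height.
CGFloat imagePixelHeight = 1920;
CGPoint uikitPoint = CGPointMake(100, 50);
CGPoint ciPoint = CGPointMake(uikitPoint.x, imagePixelHeight - uikitPoint.y);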

Can someone give me a code snippet that implements depth of field with CIFilter?

And give me some explanation of that code, especially regarding inputPoint0 and inputPoint1?

Here is the code I'm working with.

The depth-of-field method:

@implementation CIImage (myCustomExtension)
+ (CIImage *)CIFilterDepthOfField:(CIImage *)inputImage inputPoint0:(CIVector *)inputPoint0 inputPoint1:(CIVector *)inputPoint1 inputSaturation:(CGFloat)inputSaturation inputUnsharpMaskRadius:(CGFloat)inputUnsharpMaskRadius inputUnsharpMaskIntensity:(CGFloat)inputUnsharpMaskIntensity inputRadius:(CGFloat)inputRadius {
    CIFilter *depthOfField = [CIFilter filterWithName:@"CIDepthOfField"
                          withInputParameters:@{kCIInputImageKey: inputImage,
                                                @"inputPoint0": inputPoint0,
                                                @"inputPoint1": inputPoint1,
                                                kCIInputSaturationKey: @(inputSaturation),
                                                @"inputUnsharpMaskRadius": @(inputUnsharpMaskRadius),
                                                @"inputUnsharpMaskIntensity": @(inputUnsharpMaskIntensity),
                                                kCIInputRadiusKey: @(inputRadius)}];
    return [depthOfField outputImage];
}
@end
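As I read the docs, inputPoint0 and inputPoint1 define a line in the image's own (pixel) coordinate space; the region along that line stays in focus and blur increases with distance from it. A hypothetical call that should keep a horizontal band across the middle of a 1000 x 600 image in focus (the image and dimensions are made up for illustration):

// Hypothetical example: a solid-gray CIImage cropped to 1000 x 600.
CIImage *source = [[CIImage imageWithColor:[CIColor colorWithRed:0.5 green:0.5 blue:0.5]] imageByCroppingToRect:CGRectMake(0, 0, 1000, 600)];
// The focus line runs horizontally through y = 300 (the image center).
CIVector *p0 = [CIVector vectorWithX:0 Y:300];
CIVector *p1 = [CIVector vectorWithX:1000 Y:300];
CIImage *result = [CIImage CIFilterDepthOfField:source inputPoint0:p0 inputPoint1:p1 inputSaturation:1.5 inputUnsharpMaskRadius:2.5 inputUnsharpMaskIntensity:0.5 inputRadius:6];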

My implementation, in a custom class:

// Depth of field: a vertical focus line starting at the tapped point, 300 px long
CIVector *point0 = [CIVector vectorWithX:self.point.x Y:self.point.y];
CIVector *point1 = [CIVector vectorWithX:self.point.x Y:self.point.y + 300];
outputCIImage = [CIImage CIFilterDepthOfField:outputCIImage inputPoint0:point0 inputPoint1:point1 inputSaturation:1.5 inputUnsharpMaskRadius:0.5 inputUnsharpMaskIntensity:2.5 inputRadius:6];
UIImage *outputUIImage = [UIImage renderCIImageToUIImage:outputCIImage withCIContext:self.ciContext];

self.imageView.image = outputUIImage;

The point comes from a tap gesture recognizer:

- (IBAction)touchPointSet:(UITapGestureRecognizer *)sender {
    // Get the tap location in the image view
    CGPoint point = [sender locationInView:self.imageView];

    // Calculate the aspect-fit scale of the image inside the image view
    CGFloat imageScale = fmin(self.imageView.frame.size.width / self.imageView.image.size.width,
                              self.imageView.frame.size.height / self.imageView.image.size.height);

    // Offset the tap location into the displayed image's bounds
    CGPoint pointInImage = CGPointMake(point.x - ((self.imageView.frame.size.width - self.imageView.image.size.width * imageScale) / 2),
                                       point.y - ((self.imageView.frame.size.height - self.imageView.image.size.height * imageScale) / 2));

    // Flip the y-axis to move from UIKit to Core Image coordinates
    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    transform = CGAffineTransformTranslate(transform, 0, -self.imageView.image.size.height * imageScale);
    CGPoint transformedPoint = CGPointApplyAffineTransform(pointInImage, transform);

    // Scale from view points to pixels using the screen scale
    CGFloat screenScale = [UIScreen mainScreen].scale;
    self.point = CGPointMake(transformedPoint.x * screenScale, transformedPoint.y * screenScale);
}
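One sanity check I can add after the conversion (a hypothetical debugging snippet; outputCIImage stands for whatever image is about to be filtered): if the converted point falls outside the image extent, the focus line would miss the image entirely, which might produce an all-blurred or all-sharp result.

// Hypothetical check: make sure the converted point lies inside the image.
CGRect extent = outputCIImage.extent;
if (!CGRectContainsPoint(extent, self.point)) {
    NSLog(@"Point %@ is outside the image extent %@", NSStringFromCGPoint(self.point), NSStringFromCGRect(extent));
}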

Here is the method that renders the CIImage to a UIImage:

@implementation UIImage (myCustomExtension)
+ (UIImage *)renderCIImageToUIImage:(CIImage *)ciImage withCIContext:(CIContext *)ciContext {
    // Render the CIImage into a CGImage, then wrap it in a UIImage
    CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return uiImage;
}
@end
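For completeness, the context is created once and reused, since creating a CIContext is expensive (a minimal sketch; self.ciContext is the property used above):

// Create the CIContext once (e.g. in viewDidLoad) and reuse it for every render.
self.ciContext = [CIContext contextWithOptions:nil];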

0 Answers:

No answers yet.