I am following this WWDC talk.
In the talk, he mentions a filter called "CIEdgePreserveUpsampleFilter", which upsamples an image while preserving edges. When I try to apply it to my CIImage, I get an uninitialized image in one place and a crash in another.
Here is the code I'm using and how I'm trying to apply the filter (which is clearly wrong). I just can't find any documentation on how this filter should be applied; all I know is that I want its result applied to my image. I've added comments next to the places where I try to apply the filter, describing what happens when I do.
func createMask(for depthImage: CIImage, withFocus focus: CGFloat, andScale scale: CGFloat, andSlope slope: CGFloat = 4.0, andWidth width: CGFloat = 0.1) -> CIImage {
    let s1 = slope
    let s2 = -slope
    let filterWidth = 2 / slope + width
    let b1 = -s1 * (focus - filterWidth / 2)
    let b2 = -s2 * (focus + filterWidth / 2)

    let mask0 = depthImage
        .applyingFilter("CIColorMatrix", withInputParameters: [
            "inputRVector": CIVector(x: s1, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s1, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s1, w: 0),
            "inputBiasVector": CIVector(x: b1, y: b1, z: b1, w: 0)])
        .applyingFilter("CIColorClamp")
        .applyingFilter("CIEdgePreserveUpsampleFilter") // returns uninitialized image

    let mask1 = depthImage
        .applyingFilter("CIColorMatrix", withInputParameters: [
            "inputRVector": CIVector(x: s2, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s2, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s2, w: 0),
            "inputBiasVector": CIVector(x: b2, y: b2, z: b2, w: 0)])
        .applyingFilter("CIColorClamp")

    var combinedMask = mask0.applyingFilter("CIEdgePreserveUpsampleFilter", withInputParameters: ["inputBackgroundImage": mask1]) // complete crash

    if PortraitModel.sharedInstance.filterArea == .front {
        combinedMask = combinedMask.applyingFilter("CIColorInvert")
    }

    let mask = combinedMask.applyingFilter("CIBicubicScaleTransform", withInputParameters: [kCIInputScaleKey: scale])

    return mask
}
Answer (score: 1)
The runtime headers and some usage code I've found seem to indicate that CIEdgePreserveUpsampleFilter does not take an inputBackgroundImage parameter, but rather inputSmallImage.
See https://gist.github.com/HarshilShah/ca0e18db01ce250fd308ab5acc99a9d0
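For illustration, here is a minimal sketch of how a call with that key might look. The function and variable names are hypothetical, and it assumes the usual shape of edge-preserving upsampling: the full-resolution image is the receiver (inputImage) that supplies the edges, and the low-resolution mask is the image being upsampled, passed under "inputSmallImage".

import CoreImage

// Sketch only, not the poster's code: upsample a low-resolution mask to the
// size of `fullImage`, using `fullImage`'s edges as the guide. The key is
// "inputSmallImage", not "inputBackgroundImage".
func upsample(mask smallMask: CIImage, guidedBy fullImage: CIImage) -> CIImage {
    return fullImage.applyingFilter(
        "CIEdgePreserveUpsampleFilter",
        withInputParameters: ["inputSmallImage": smallMask])
}

In your createMask(for:withFocus:andScale:andSlope:andWidth:) that would mean passing the depth-derived mask as "inputSmallImage" rather than chaining the filter onto mask0 with "inputBackgroundImage".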