How to use AVDepthData

Date: 2018-03-22 10:06:46

Tags: swift avfoundation

I'm trying to mimic the Portrait mode Apple created in their camera app. I currently get a working result, but I'm not satisfied with it, because the filter bleeds into the edges of the object I'm trying to isolate instead of following a precise line around it (images attached).

I used this tutorial for the implementation. You can easily see the difference between my result and Apple's by looking at these images.

I really just took the exact code from the tutorial I linked, but I'll add the important snippets here:

// Creating the mask for the filter:

func createMask(for depthImage: CIImage, withFocus focus: CGFloat = 0.5, andScale scale: CGFloat, andSlope slope: CGFloat = 4.0, andWidth width: CGFloat = 0.1) -> CIImage {

    // Two opposing linear ramps around the focus depth; together they form a
    // trapezoidal band-pass: 1 near `focus`, falling off to 0 on both sides.
    let s1 = slope
    let s2 = -slope
    let filterWidth =  2 / slope + width
    let b1 = -s1 * (focus - filterWidth / 2)
    let b2 = -s2 * (focus + filterWidth / 2)

    // mask0 = clamp(s1 * depth + b1): ramps up to 1 approaching the focus depth.
    let mask0 = depthImage
        .applyingFilter("CIColorMatrix", withInputParameters: [
            "inputRVector": CIVector(x: s1, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s1, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s1, w: 0),
            "inputBiasVector": CIVector(x: b1, y: b1, z: b1, w: 0)])
        .applyingFilter("CIColorClamp")

    // mask1 = clamp(s2 * depth + b2): ramps down from 1 leaving the focus depth.
    let mask1 = depthImage
        .applyingFilter("CIColorMatrix", withInputParameters: [
            "inputRVector": CIVector(x: s2, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: s2, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: s2, w: 0),
            "inputBiasVector": CIVector(x: b2, y: b2, z: b2, w: 0)])
        .applyingFilter("CIColorClamp")

    // Darken blend keeps the per-pixel minimum of the two ramps.
    let combinedMask = mask0.applyingFilter("CIDarkenBlendMode", withInputParameters: ["inputBackgroundImage" : mask1])
    // Upscale the low-resolution mask to the main image's size.
    let mask = combinedMask.applyingFilter("CIBicubicScaleTransform", withInputParameters: [kCIInputScaleKey: scale])

    return mask
}
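
To make the math above easier to follow, here is a scalar sketch (my own illustration, not code from the tutorial) of what the two CIColorMatrix ramps, CIColorClamp, and CIDarkenBlendMode compute for a single pixel: a trapezoidal band-pass around the focus depth.

// Scalar equivalent of the mask pipeline above, for one normalized depth
// value d in [0, 1]. Illustrative only.
func maskValue(depth d: CGFloat, focus: CGFloat = 0.5, slope: CGFloat = 4.0, width: CGFloat = 0.1) -> CGFloat {
    let filterWidth = 2 / slope + width
    let rising  =  slope * (d - (focus - filterWidth / 2))  // mask0 before clamping
    let falling = -slope * (d - (focus + filterWidth / 2))  // mask1 before clamping
    // CIDarkenBlendMode takes the per-pixel minimum; CIColorClamp limits it to [0, 1].
    return max(0, min(1, min(rising, falling)))
}

Pixels whose depth lies within `width` of the focus get a value of 1 (kept sharp); outside that band the value falls off linearly to 0 at a rate set by `slope`.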

// Applying the filter according to the mask:

func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up, blurRadius: CGFloat) -> UIImage? {

    // CIMaskedVariableBlur blurs most where its mask is white, so invert the
    // mask: the in-focus band stays sharp and everything else gets blurred.
    let invertedMask = mask.applyingFilter("CIColorInvert")

    let output = image.applyingFilter("CIMaskedVariableBlur", withInputParameters: ["inputMask" : invertedMask,
                                                                                    "inputRadius": blurRadius])

    // `context` is a CIContext created elsewhere; crop to the source extent,
    // since the blur expands the image's extent.
    guard let cgImage = context.createCGImage(output, from: image.extent) else {
        return nil
    }

    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
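
For context, here is a minimal sketch of how these functions can be wired together. It assumes `photoData` holds an image with embedded depth (for example from AVCapturePhoto's fileDataRepresentation()); note that the tutorial also normalizes the disparity map to [0, 1] before masking, which is omitted here for brevity.

import AVFoundation
import CoreImage
import ImageIO
import UIKit

let context = CIContext()  // the CIContext used by blur(image:mask:orientation:blurRadius:)

func portraitEffect(from photoData: Data) -> UIImage? {
    guard
        let source = CGImageSourceCreateWithData(photoData as CFData, nil),
        let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
            source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
        var depthData = try? AVDepthData(fromDictionaryRepresentation: auxInfo),
        let mainImage = CIImage(data: photoData)
    else { return nil }

    // Work in 32-bit disparity, where larger values mean closer to the camera.
    if depthData.depthDataType != kCVPixelFormatType_DisparityFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    }

    let depthImage = CIImage(cvPixelBuffer: depthData.depthDataMap)

    // The depth map is much smaller than the photo, so the mask must be scaled up.
    let scale = mainImage.extent.width / depthImage.extent.width

    let mask = createMask(for: depthImage, withFocus: 0.5, andScale: scale)
    return blur(image: mainImage, mask: mask, blurRadius: 10)
}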

This image is my result. This is Apple's camera app result.

At this point I'd just like to understand what the next step is to improve this result. Maybe some additional work on the depth map? Maybe a smoother way of applying the filter based on the depth map?
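
One direction I'm considering (an assumption on my part, not something the tutorial covers): replacing the bicubic upscaling of the mask with Core Image's edge-preserving upsampling, which uses the full-resolution photo as a guide image so that the upscaled mask's edges snap to the actual contours in the photo instead of being smeared by interpolation.

// Hypothetical replacement for the CIBicubicScaleTransform step in createMask:
// upsample the low-resolution mask using the photo as an edge guide.
func upsampleMask(_ mask: CIImage, guidedBy image: CIImage) -> CIImage {
    // inputImage (the receiver) is the full-resolution guide;
    // inputSmallImage is the low-resolution image to upsample.
    return image.applyingFilter("CIEdgePreserveUpsampleFilter", withInputParameters: ["inputSmallImage": mask])
}

With this variant, createMask would be called with a scale of 1 and its result passed through upsampleMask before blurring.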

0 Answers