How do I use CIFilter with CIPerspectiveCorrection in a capture session?

Asked: 2019-01-28 15:45:15

Tags: swift core-image

I want to scan documents and fix any perspective distortion from the phone camera, the way the Notes app can. Everything works until I try to use CIFilter(name: "CIPerspectiveCorrection"); at that point I get a mangled image and I'm struggling to understand where I'm going wrong.

I've tried swapping the corner parameters, trying other filters, and rotating the image, but none of that has worked for me.

Here is a small project I set up to test all of this: https://github.com/iViktor/scanner

Basically, I run a VNDetectRectanglesRequest on the AVCaptureSession and save the rectangle it returns in private var targetRectangle = VNRectangleObservation().

I then use that observation to recompute the points in the captured image and run the filter on the image.
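
For context, a minimal sketch of what that per-frame detection can look like (the sample-buffer delegate wiring and the fixed buffer orientation here are assumptions, not code from the project; the project stores the result in targetRectangle):

import AVFoundation
import Vision

// Sketch only: the video-data-output delegate and the .right orientation are assumptions.
extension DocumentScannerViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        let request = VNDetectRectanglesRequest { request, _ in
            guard let observation = request.results?.first as? VNRectangleObservation else { return }
            DispatchQueue.main.async {
                // Corner points are normalized to 0...1 with the origin in the lower-left.
                self.targetRectangle = observation
            }
        }
        request.maximumObservations = 1

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
        try? handler.perform([request])
    }
}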

extension DocumentScannerViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else { return }
        guard let ciImage = CIImage(data: imageData, options: [.applyOrientationProperty: true]) else { return }
        let image = UIImage(ciImage: ciImage)

        // Scale the normalized observation corners up to the image's point size.
        let imageTopLeft = CGPoint(x: image.size.width * targetRectangle.bottomLeft.x, y: targetRectangle.bottomLeft.y * image.size.height)
        let imageTopRight = CGPoint(x: image.size.width * targetRectangle.bottomRight.x, y: targetRectangle.bottomRight.y * image.size.height)
        let imageBottomLeft = CGPoint(x: image.size.width * targetRectangle.topLeft.x, y: targetRectangle.topLeft.y * image.size.height)
        let imageBottomRight = CGPoint(x: image.size.width * targetRectangle.topRight.x, y: targetRectangle.topRight.y * image.size.height)

        let flattenedImage = image.flattenImage(topLeft: imageTopLeft, topRight: imageTopRight, bottomLeft: imageBottomLeft, bottomRight: imageBottomRight)
        let finalImage = UIImage(ciImage: flattenedImage, scale: image.scale, orientation: image.imageOrientation)

        //performSegue(withIdentifier: "showPhoto", sender: image)
        //performSegue(withIdentifier: "showPhoto", sender: UIImage(ciImage: flattenedImage))
        performSegue(withIdentifier: "showPhoto", sender: finalImage)
    }
}

This is the code that isn't working, the part I'm struggling with:

extension UIImage {

    func flattenImage(topLeft: CGPoint, topRight: CGPoint, bottomLeft: CGPoint, bottomRight: CGPoint) -> CIImage {
        let docImage = self.ciImage!
        let rect = CGRect(origin: CGPoint.zero, size: self.size)
        let perspectiveCorrection = CIFilter(name: "CIPerspectiveCorrection")!
        // Flip the corner points into Core Image's bottom-left-origin space before handing them to the filter.
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: topLeft, extent: rect)), forKey: "inputTopLeft")
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: topRight, extent: rect)), forKey: "inputTopRight")
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: bottomLeft, extent: rect)), forKey: "inputBottomLeft")
        perspectiveCorrection.setValue(CIVector(cgPoint: self.cartesianForPoint(point: bottomRight, extent: rect)), forKey: "inputBottomRight")
        perspectiveCorrection.setValue(docImage, forKey: kCIInputImageKey)

        return perspectiveCorrection.outputImage!
    }

    func cartesianForPoint(point: CGPoint, extent: CGRect) -> CGPoint {
        return CGPoint(x: point.x, y: extent.height - point.y)
    }
}

So, in the end, I want to scan a document (an invoice, for example) and automatically fix any user error such as a perspective problem. Right now the filter I apply to the image produces a strange hand-fan-like effect.

1 Answer:

Answer 0 (score: 1)

Based on the comments, I updated the code: instead of using targetRectangle I now use the points returned when the target rectangle's path is drawn, and I changed where and how I apply them to the image, because Core Image uses a different coordinate system and the image was mirrored.
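
To make the coordinate change concrete, here is an illustrative helper (hypothetical, not part of the project) that maps a point normalized against the view, with UIKit's top-left origin, into the image's pixel space, with Core Image's bottom-left origin. The code below does not use this helper; it relabels the tracked corners instead and leaves the y-flip to cartesianForPoint inside flattenImage, which is assumed to be unchanged from the question.

import CoreGraphics

// Hypothetical helper, for illustration only. `normalized` is a point in 0...1 view
// coordinates (UIKit origin: top-left); the result is in the image's pixel space
// (Core Image origin: bottom-left), so the y axis is flipped.
func ciPixelPoint(fromNormalized normalized: CGPoint, imageExtent extent: CGRect) -> CGPoint {
    return CGPoint(x: normalized.x * extent.width,
                   y: (1.0 - normalized.y) * extent.height)
}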

I updated:

    private func startScanner() {
        ... ... ...
        let request = VNDetectRectanglesRequest { req, error in
            DispatchQueue.main.async {
                if let observation = req.results?.first as? VNRectangleObservation {
                    // The drawn target-rect points come back in the preview layer's coordinate
                    // space and are normalized against the scanner view's size.
                    let points = self.targetRectLayer.drawTargetRect(observation: observation, previewLayer: self.previewLayer, animated: false)
                    let size = self.scannerView.frame.size
                    self.trackedTopLeftPoint = CGPoint(x: points.topLeft.x / size.width, y: points.topLeft.y / size.height)
                    self.trackedTopRightPoint = CGPoint(x: points.topRight.x / size.width, y: points.topRight.y / size.height)
                    self.trackedBottomLeftPoint = CGPoint(x: points.bottomLeft.x / size.width, y: points.bottomLeft.y / size.height)
                    self.trackedBottomRightPoint = CGPoint(x: points.bottomRight.x / size.width, y: points.bottomRight.y / size.height)
                } else {
                    _ = self.targetRectLayer.drawTargetRect(observation: nil, previewLayer: self.previewLayer, animated: false)
                }
            }
        }
    }

extension DocumentScannerViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else { return }
        guard let ciImage = CIImage(data: imageData, options: [.applyOrientationProperty: true]) else { return }
        let image = UIImage(ciImage: ciImage)

        // Core Image works in Cartesian coordinates: y = 0 is the bottom-left corner, so the
        // tracked corners are relabeled before being scaled up to the image size.
        let imageTopLeft = CGPoint(x: image.size.width * trackedBottomLeftPoint.x, y: trackedBottomLeftPoint.y * image.size.height)
        let imageTopRight = CGPoint(x: image.size.width * trackedTopLeftPoint.x, y: trackedTopLeftPoint.y * image.size.height)
        let imageBottomLeft = CGPoint(x: image.size.width * trackedBottomRightPoint.x, y: trackedBottomRightPoint.y * image.size.height)
        let imageBottomRight = CGPoint(x: image.size.width * trackedTopRightPoint.x, y: trackedTopRightPoint.y * image.size.height)

        let flattenedImage = image.flattenImage(topLeft: imageTopLeft, topRight: imageTopRight, bottomLeft: imageBottomLeft, bottomRight: imageBottomRight)
        // Render the filter output into a real bitmap before wrapping it in a UIImage.
        let newCGImage = CIContext(options: nil).createCGImage(flattenedImage, from: flattenedImage.extent)
        let doneCroppedImage = UIImage(cgImage: newCGImage!, scale: image.scale, orientation: image.imageOrientation)
        performSegue(withIdentifier: "showPhoto", sender: doneCroppedImage)
    }
}

That solved the problem.
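
As a side note on the design, and not part of the original fix: CIContext is relatively expensive to create, so instead of building a new one inside every photoOutput call it can be created once and reused across captures. A minimal sketch, with the type and method names being assumptions:

import UIKit
import CoreImage

// Sketch only (not from the original answer): keep one CIContext alive and reuse it,
// rather than creating a new context for every captured photo.
final class ImageRenderer {
    private let context = CIContext(options: nil)

    func uiImage(from ciImage: CIImage, scale: CGFloat, orientation: UIImage.Orientation) -> UIImage? {
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage, scale: scale, orientation: orientation)
    }
}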