Problem cropping an image

Asked: 2017-01-04 18:40:53

Tags: ios swift uiimage

I'm having trouble cropping an image; there are currently two things I'd like to fix:

1) After cropping, the photo quality degrades.

2) After the photo is taken, the view orientation is incorrect.

In summary: after cropping, the photo quality is below the expected standard, and when the image appears in the ImageView it is rotated 90 degrees. Why does this happen? I am trying to crop the image to match the view showing the captured stream.

Here is the image cropping:

    func crop(capture: UIImage) -> UIImage {
        let crop = cameraView.bounds
        //CGRect(x: 0, y: 0, width: 50, height: 50)
        let cgImage = capture.cgImage!.cropping(to: crop)
        let image: UIImage = UIImage(cgImage: cgImage!)
        return image
    }

This is where I call crop:

func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let photoSampleBuffer = photoSampleBuffer {
            let photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
            var imageTaken = UIImage(data: photoData!)

            //post photo

            let croppedImage = self.crop(capture: imageTaken!)

            imageTaken = croppedImage
            self.imageView.image = imageTaken
           // UIImageWriteToSavedPhotosAlbum(imageTaken!, nil, nil, nil)
        }
    }

And here is the full class:

import UIKit
import AVFoundation

class CameraVC: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, AVCapturePhotoCaptureDelegate {

    var captureSession : AVCaptureSession?
    var stillImageOutput : AVCapturePhotoOutput?
    var previewLayer : AVCaptureVideoPreviewLayer?
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet var cameraView: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Do any additional setup after loading the view.
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer?.frame = cameraView.bounds
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        captureSession = AVCaptureSession()
        captureSession?.sessionPreset = AVCaptureSessionPreset1920x1080
        stillImageOutput = AVCapturePhotoOutput()

        let backCamera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

        do {

            let input = try AVCaptureDeviceInput(device: backCamera)

            if (captureSession?.canAddInput(input))!{

                captureSession?.addInput(input)



                if (captureSession?.canAddOutput(stillImageOutput) != nil){
                    captureSession?.addOutput(stillImageOutput)

                    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect
                    previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
                    cameraView.layer.addSublayer(previewLayer!)
                    captureSession?.startRunning()
                    let captureVideoLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer.init(session: captureSession!)
                    captureVideoLayer.frame = self.cameraView.bounds
                    captureVideoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                    self.cameraView.layer.addSublayer(captureVideoLayer)



                }

            }

        } catch {

            print("An error has occurred")

        }


    }

    @IBAction func takePhoto(_ sender: UIButton) {
        didPressTakePhoto()
    }

    func didPressTakePhoto(){

        if let videoConnection = stillImageOutput?.connection(withMediaType: AVMediaTypeVideo) {



            videoConnection.videoOrientation = AVCaptureVideoOrientation.portrait

            let settingsForCapture = AVCapturePhotoSettings()

            settingsForCapture.flashMode = .auto
            settingsForCapture.isAutoStillImageStabilizationEnabled = true
            settingsForCapture.isHighResolutionPhotoEnabled = false
            stillImageOutput?.capturePhoto(with: settingsForCapture, delegate: self)

        }


    }


    func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
        if let photoSampleBuffer = photoSampleBuffer {
            let photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
            var imageTaken = UIImage(data: photoData!)

            //post photo

            let croppedImage = self.crop(capture: imageTaken!)

            imageTaken = croppedImage
            self.imageView.image = imageTaken
           // UIImageWriteToSavedPhotosAlbum(imageTaken!, nil, nil, nil)
        }
    }


    func crop(capture: UIImage) -> UIImage {
        let crop = cameraView.bounds
        //CGRect(x: 0, y: 0, width: 50, height: 50)
        let cgImage = capture.cgImage!.cropping(to: crop)
        let image: UIImage = UIImage(cgImage: cgImage!)
        return image
    }

}

2 Answers:

Answer 0 (score: 2)

Another approach is to use a Core Image filter called CIPerspectiveCorrection. Because it works with a CIImage — which is not an actual bitmap but rather a "recipe" for an image — it does not suffer from degradation.

Basically, turn your UIImage/CGImage into a CIImage, pick any 4 points in it, and crop. The region doesn't need to be a parallelogram (or a CGRect), just 4 points. There are two distinct caveats when using CI filters:

  • Instead of a CGRect, you use CIVectors. Depending on the filter, a vector can have 2, 3, 4, or even more parameters. In this case you need 4 CIVectors with 2 parameters each, corresponding to the top left (TL), top right (TR), bottom left (BL), and bottom right (BR) corners.

  • A CIImage has its origin (X/Y == 0/0) at the bottom left, not the top left. This essentially means your Y coordinate is flipped upside down relative to a CG or UI image.

Here is some sample code. First, some sample declarations, including a CI context:

let uiTL = CGPoint(x: 50, y: 50)
let uiTR = CGPoint(x: 75, y: 75)
let uiBL = CGPoint(x: 100, y: 300)
let uiBR = CGPoint(x: 25, y: 200)

var ciImage:CIImage!
var ctx:CIContext!

@IBOutlet weak var imageView: UIImageView!

In viewDidLoad we set up the context and get our CIImage from the UIImageView:

override func viewDidLoad() {
    super.viewDidLoad()
    ctx = CIContext(options: nil)
    ciImage = CIImage(image: imageView.image!)
}

UIImageViews have a frame, a CGRect. UIImages have a size, a CGSize. CIImages have an extent, which is basically your CGSize. But remember, the Y axis is flipped, and the extent can extend infinitely! (That's not the case for a UIImage source, though.) Here are some helper functions to convert things:

func createScaledPoint(_ pt:CGPoint) -> CGPoint {
    let x = (pt.x / imageView.frame.width) * ciImage.extent.width
    let y = (pt.y / imageView.frame.height) * ciImage.extent.height
    return CGPoint(x: x, y: y)
}
func createVector(_ point:CGPoint) -> CIVector {
    return CIVector(x: point.x, y: ciImage.extent.height - point.y)
}
func createPoint(_ vector:CIVector) -> CGPoint {
    return CGPoint(x: vector.x, y: ciImage.extent.height - vector.y)
}

Here is the actual call to CIPerspectiveCorrection. If I recall correctly, the change for Swift 3 was to use AnyObject. While more strongly typed variables worked in previous versions of Swift, they now cause a crash:

func doPerspectiveCorrection(
    _ image:CIImage,
    context:CIContext,
    topLeft:AnyObject,
    topRight:AnyObject,
    bottomRight:AnyObject,
    bottomLeft:AnyObject)
    -> UIImage {
        let filter = CIFilter(name: "CIPerspectiveCorrection")
        filter?.setValue(topLeft, forKey: "inputTopLeft")
        filter?.setValue(topRight, forKey: "inputTopRight")
        filter?.setValue(bottomRight, forKey: "inputBottomRight")
        filter?.setValue(bottomLeft, forKey: "inputBottomLeft")
        filter!.setValue(image, forKey: kCIInputImageKey)
        let cgImage = context.createCGImage((filter?.outputImage)!, from: (filter?.outputImage!.extent)!)
        return UIImage(cgImage: cgImage!)
} 

Now that we have our CIImage, we create the four CIVectors. In this sample project I hard-coded the 4 CGPoints and chose to create the CIVectors in viewWillLayoutSubviews, the earliest point where I have the UI frames:

override func viewWillLayoutSubviews() {
    let ciTL = createVector(createScaledPoint(uiTL))
    let ciTR = createVector(createScaledPoint(uiTR))
    let ciBR = createVector(createScaledPoint(uiBR))
    let ciBL = createVector(createScaledPoint(uiBL))
    imageView.image = doPerspectiveCorrection(CIImage(image: imageView.image!)!,
                                              context: ctx,
                                              topLeft: ciTL,
                                              topRight: ciTR,
                                              bottomRight: ciBR,
                                              bottomLeft: ciBL)
}

If you drop this code into a project, load an image into the UIImageView, work out the 4 CGPoints you want, and run it, you should see the cropped image. Good luck!

Answer 1 (score: 0)

Adding to @dfd's answer. If you don't want to use the fancy Core Image filters, here's what might work.

  • Poor photo quality after capture?

    Although you are using the highest possible session preset, AVCaptureSessionPreset1920x1080, you are converting the sample buffer to JPEG and then cropping it, so obviously some quality is lost. To get a high-quality cropped image, try the session preset AVCaptureSessionPresetHigh to let AVFoundation decide on high-quality video, and use dngPhotoDataRepresentation(forRawSampleBuffer:previewPhotoSampleBuffer:) to get the picture in DNG format.
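A rough sketch of the RAW route, assuming iOS 10 / Swift 3 API names as used elsewhere in this thread (note this is an assumption-laden outline, not the answerer's code; in particular, RAW capture may require the photo session preset rather than AVCaptureSessionPresetHigh, and the pixel format must come from availableRawPhotoPixelFormatTypes):

```swift
// Request a RAW capture instead of JPEG (pixel format choice is an assumption).
let rawFormat = stillImageOutput!.availableRawPhotoPixelFormatTypes.first!.uint32Value
let settings = AVCapturePhotoSettings(rawPixelFormatType: rawFormat)
stillImageOutput?.capturePhoto(with: settings, delegate: self)

// RAW captures arrive in the RAW variant of the delegate callback:
func capture(_ captureOutput: AVCapturePhotoOutput,
             didFinishProcessingRawPhotoSampleBuffer rawSampleBuffer: CMSampleBuffer?,
             previewPhotoSampleBuffer: CMSampleBuffer?,
             resolvedSettings: AVCaptureResolvedPhotoSettings,
             bracketSettings: AVCaptureBracketedStillImageSettings?,
             error: Error?) {
    guard let rawSampleBuffer = rawSampleBuffer else { return }
    // dngData is a lossless DNG representation; crop from this instead of a JPEG.
    let dngData = AVCapturePhotoOutput.dngPhotoDataRepresentation(
        forRawSampleBuffer: rawSampleBuffer,
        previewPhotoSampleBuffer: previewPhotoSampleBuffer)
}
```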

  • Incorrect view and orientation after taking the photo?

The sample buffer that the didFinishProcessingPhotoSampleBuffer delegate hands you is always rotated 90 degrees to the left, so you will probably want to rotate it back 90 degrees yourself.

Initialize your UIImage with the cgImage, the correct orientation, and a scale of 1.0:

init(cgImage: CGImage, scale: CGFloat, orientation: UIImageOrientation)
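For example, inside the delegate callback after imageTaken has been created from the JPEG data, something like this sketch could re-wrap the pixels with an explicit orientation instead of rotating them (the choice of .right to compensate for a buffer rotated 90° left is an assumption and may need adjusting per device orientation):

```swift
// Re-tag the existing pixels with an orientation rather than redrawing them.
if let cg = imageTaken?.cgImage {
    // .right tells UIKit to display the 90°-left-rotated pixels upright.
    imageTaken = UIImage(cgImage: cg, scale: 1.0, orientation: .right)
}
```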