AVFoundation camera image quality degraded upon processing

Date: 2016-02-12 19:55:01

Tags: ios swift camera avfoundation

I made an AVFoundation camera to crop square images based on @fsaint's answer: Cropping AVCaptureVideoPreviewLayer output to a square. The sizing of the photo is great; that works perfectly. However, the image quality is noticeably degraded (see below: the first image is the preview layer showing good resolution; the second is the degraded image that was captured). It definitely has to do with what happens in processImage:, as the image resolution is fine without it, just not the right aspect ratio. The documentation on image processing is pretty bare; any insights are GREATLY appreciated!!

Setting up camera:

func setUpCamera() {

      captureSession = AVCaptureSession()
      captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
      let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
      if backCamera?.hasFlash == true {
         do { 
            try backCamera.lockForConfiguration()
            backCamera.flashMode = AVCaptureFlashMode.Auto
            backCamera.unlockForConfiguration()
         } catch {
            // error handling
         }
      }
      var error: NSError?
      var input: AVCaptureDeviceInput!
      do {
         input = try AVCaptureDeviceInput(device: backCamera)
      } catch let error1 as NSError {
         error = error1
         input = nil
      }
      if error == nil && captureSession!.canAddInput(input) {
         captureSession!.addInput(input)
         stillImageOutput = AVCaptureStillImageOutput()
         stillImageOutput!.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
         if captureSession!.canAddOutput(stillImageOutput) {
            captureSession!.sessionPreset = AVCaptureSessionPresetHigh;
            captureSession!.addOutput(stillImageOutput)
            previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer!.connection?.videoOrientation = AVCaptureVideoOrientation.Portrait
            previewVideoView.layer.addSublayer(previewLayer!)
            captureSession!.startRunning()
         }
      }
   }

Snapping photo:

@IBAction func onSnapPhotoButtonPressed(sender: UIButton) {

      if let videoConnection = self.stillImageOutput!.connectionWithMediaType(AVMediaTypeVideo) {
         videoConnection.videoOrientation = AVCaptureVideoOrientation.Portrait
         self.stillImageOutput?.captureStillImageAsynchronouslyFromConnection(videoConnection, completionHandler: {(sampleBuffer, error) in

            if (sampleBuffer != nil) {

               let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(sampleBuffer)
               let dataProvider = CGDataProviderCreateWithCFData(imageData)
               let cgImageRef = CGImageCreateWithJPEGDataProvider(dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)

               let image = UIImage(CGImage: cgImageRef!, scale: 1.0, orientation: UIImageOrientation.Right)   
               self.processImage(image)
               self.clearPhotoButton.hidden = false
               self.nextButton.hidden = false
               self.view.bringSubviewToFront(self.imageView)
            }
         })
      }
   }

Process image to square:

  func processImage(image:UIImage) {

      let deviceScreen = previewLayer?.bounds
      let width:CGFloat = (deviceScreen?.size.width)!
      UIGraphicsBeginImageContext(CGSizeMake(width, width))
      let aspectRatio:CGFloat = image.size.height * width / image.size.width
      image.drawInRect(CGRectMake(0, -(aspectRatio - width) / 2.0, width, aspectRatio))
      let smallImage = UIGraphicsGetImageFromCurrentImageContext()
      UIGraphicsEndImageContext()
      let cropRect = CGRectMake(0, 0, width, width)
      let imageRef:CGImageRef = CGImageCreateWithImageInRect(smallImage.CGImage, cropRect)!
      imageView.image = UIImage(CGImage: imageRef)
   }

preview layer

processed image
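The scale of the resolution loss in processImage can be estimated with plain arithmetic (the capture and screen numbers below are assumed examples, not taken from the question):

```swift
// Back-of-envelope check with assumed sample numbers: an ~8 MP still is
// about 2448 px wide, and suppose the square preview is 320 pt wide on a
// 2x Retina screen.
let captureWidthPx = 2448.0
let previewWidthPt = 320.0
let screenScale = 2.0

// UIGraphicsBeginImageContext renders at a scale factor of 1.0, so the
// cropped square comes out at previewWidthPt * 1.0 pixels per side...
let producedPx = previewWidthPt * 1.0
// ...while the screen needs previewWidthPt * screenScale pixels to look
// sharp, and the capture itself had far more resolution available.
let neededPx = previewWidthPt * screenScale
print(producedPx, neededPx, captureWidthPx)
```

At half the pixels the screen wants (and a fraction of what the capture provided), the processed square can only look soft next to the live preview layer.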

1 Answer:

Answer 0 (score: 1)

There are a few things wrong with your processImage() function.

First of all, you're creating a new graphics context with UIGraphicsBeginImageContext().

According to the Apple docs on this function:

This function is equivalent to calling the UIGraphicsBeginImageContextWithOptions function with the opaque parameter set to NO and a scale factor of 1.0.

Because the scale factor is 1.0, it is going to look pixelated when displayed on-screen, as the screen's resolution is (most likely) higher.

You want to be using the UIGraphicsBeginImageContextWithOptions() function, and pass 0.0 for the scale factor. According to the docs on this function, for the scale argument:

If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.

For example:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, width), false, 0.0)

Your output should now look nice and crisp, as it is being rendered with the same scale as the screen.


Second of all, there's a problem with the width you're passing in.

let width:CGFloat = (deviceScreen?.size.width)!
UIGraphicsBeginImageContext(CGSizeMake(width, width))

You shouldn't be passing in the width of the screen here, it should be the width of the image. For example:

let width:CGFloat = image.size.width

You will then have to change the aspectRatio variable to take the screen width, such as:

let aspectRatio:CGFloat = image.size.height * (deviceScreen?.size.width)! / image.size.width

Third of all, you can simplify your cropping function significantly. There's no need to draw the image twice; you only need to translate the context up, and then draw the image once.
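That single-draw version might look like the following sketch (Swift 2 era APIs to match the question's code; the function name and imageView come from the question, but the exact offset and scale choices are assumptions, not the answerer's verbatim code):

```swift
// Sketch of the simplified crop: draw the image once into a square context,
// shifted up so its vertical center lands in the square. Assumes a portrait
// image (height >= width).
func processImage(image: UIImage) {
    let side = image.size.width  // square side = full image width, so no resolution is thrown away
    // A scale of 0.0 would match the screen; image.scale keeps the capture's own density.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(side, side), false, image.scale)
    // Translate the context up by half the excess height, then draw once at full size.
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, -(image.size.height - side) / 2.0)
    image.drawInRect(CGRectMake(0, 0, side, image.size.height))
    let squareImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    imageView.image = squareImage
}
```

The translate-then-draw replaces both the intermediate scaled draw and the CGImageCreateWithImageInRect crop from the question, so the image is only resampled once.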