SWIFT 3: Capturing a photo with AVCapturePhotoOutput (need another set of eyes on this code, why is it not working?)

Date: 2017-03-16 15:02:37

Tags: ios swift camera avcapturesession avcapturedevice

I have a custom camera that adopts AVCapturePhotoCaptureDelegate, with the following code to capture a still image:

Outlets, Variables and Constants

@IBOutlet weak var cameraPreview: UIView!
@IBOutlet weak var takePhotoPreview: UIImageView!

private var cameraView: AVCaptureVideoPreviewLayer!
private var camera: AVCaptureDevice!
private var cameraInput: AVCaptureDeviceInput!
private var cameraOutput: AVCapturePhotoOutput!
private var photoSampleBuffer: CMSampleBuffer?
private var previewPhotoSampleBuffer: CMSampleBuffer?
private var photoData: Data? = nil

private let cameraSession = AVCaptureSession()
private let photoOutput = AVCapturePhotoOutput()

Setting up the camera session

private func createCamera() {
    cameraSession.beginConfiguration()
    cameraSession.sessionPreset = AVCaptureSessionPresetPhoto
    cameraSession.automaticallyConfiguresCaptureDeviceForWideColor = true

    // Add Camera Input
    if let defaultCamera = AVCaptureDeviceDiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaTypeVideo, position: .back).devices {
        camera = defaultCamera.first
        do {
            let cameraInput = try AVCaptureDeviceInput(device: camera)
            if cameraSession.canAddInput(cameraInput) {
                cameraSession.addInput(cameraInput)
                print("Camera input added to the session")
            }
        } catch { print("Could not add camera input to the camera session") }
    }

    // Add Camera View Input
    if let cameraView = AVCaptureVideoPreviewLayer(session: cameraSession) {
        cameraView.frame = cameraPreview.bounds
        cameraView.videoGravity = AVLayerVideoGravityResizeAspectFill
        cameraView.cornerRadius = 12.0
        cameraPreview.layer.addSublayer(cameraView)
        print("Camera view created for the camera session")
    } else { print("Could not create camera preview") }

    // Add Photo Output
    let cameraPhotoOutput = AVCapturePhotoOutput()
    if cameraSession.canAddOutput(cameraPhotoOutput) {
        cameraSession.addOutput(cameraPhotoOutput)
        cameraPhotoOutput.isHighResolutionCaptureEnabled = true
        print("Camera output added to the camera session")
    } else {
        print("Could not add camera photo output to the camera session")
        cameraSession.commitConfiguration()
        return
    }

    cameraSession.commitConfiguration()

    cameraSession.startRunning()
}

Capture Button

@IBOutlet weak var cameraShutter: UIButton!
@IBAction func cameraShutter(_ sender: UIButton) {
    let photoSettings = AVCapturePhotoSettings()
    photoSettings.flashMode = .on
    photoSettings.isHighResolutionPhotoEnabled = true
    photoSettings.isAutoStillImageStabilizationEnabled = true
    if photoSettings.availablePreviewPhotoPixelFormatTypes.count > 0 {
        photoSettings.previewPhotoFormat = [ kCVPixelBufferPixelFormatTypeKey as String : photoSettings.availablePreviewPhotoPixelFormatTypes.first!]
    }
    cameraPhotoOutput.capturePhoto(with: photoSettings, delegate: self)
}

Photo capture delegate function

func capture(_ captureOutput: AVCapturePhotoOutput, didFinishProcessingPhotoSampleBuffer photoSampleBuffer: CMSampleBuffer?, previewPhotoSampleBuffer: CMSampleBuffer?, resolvedSettings: AVCaptureResolvedPhotoSettings, bracketSettings: AVCaptureBracketedStillImageSettings?, error: Error?) {
    if let photoSampleBuffer = photoSampleBuffer {
        photoData = AVCapturePhotoOutput.jpegPhotoDataRepresentation(forJPEGSampleBuffer: photoSampleBuffer, previewPhotoSampleBuffer: previewPhotoSampleBuffer)
        let photoDataProvider = CGDataProvider(data: photoData as! CFData)
        let cgImagePhotoRef = CGImage(jpegDataProviderSource: photoDataProvider!, decode: nil, shouldInterpolate: true, intent: .absoluteColorimetric)
        let newPhoto = UIImage(cgImage: cgImagePhotoRef!, scale: 1.0, orientation: UIImageOrientation.right)
        self.takePhotoPreview.image = newPhoto
        self.takePhotoPreview.isHidden = false
    } else {
        print("Error capturing photo: \(error)")
        return
    }
}

OK, so here's the deal: I put a breakpoint on cameraPhotoOutput.capturePhoto(with: photoSettings, delegate: self), and as soon as execution reaches that line I get the following error:

Error message

fatal error: unexpectedly found nil while unwrapping an Optional value

The code above comes straight from Apple's sample documentation "AVCam", along with SO Q&As (link, link) and input from others repeating those answers. My ultimate goal is to capture an image and immediately push it, and the user, to a new ViewController for editing/posting/saving; for now, though, I'm just using a UIImageView to confirm the capture... which isn't working in the first place.

So, what is going on with this implementation? It has been driving me crazy for days.

Swift 3, Xcode 8

2 Answers:

Answer 0 (score: 0)

Try changing

public static boolean m_stopped

to volatile

Answer 1 (score: 0)

OK, figured it out. El Tomato was on the right track about the problem child, but that wasn't quite the right prescription. My createCamera() function is declared private, which of course makes its contents invisible outside its body. So the correctly configured AVCapturePhotoOutput() and its buffer feed didn't exist when the capturePhoto() call executed... throwing the error described above.

Which means that this line:

cameraPhotoOutput.capturePhoto(with: photoSettings, delegate: self)

was correct; it was just set up wrong in the execution. To get it executing correctly, I...

  • changed the private let photoOutput = AVCapturePhotoOutput() constant
  • to private let cameraPhotoOutput = AVCapturePhotoOutput()
  • and referenced that constant directly inside private func createCamera()

and image capture immediately executed perfectly (the resulting setup is sketched below).
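For concreteness, here is a minimal sketch of how the corrected pieces fit together. The property and function names mirror the ones in the question and this answer; the class shell and anything marked as such in the comments are assumptions for illustration, not the poster's exact code:

import UIKit
import AVFoundation

// Sketch only (Swift 3 / iOS 10-era API, as used in the question). The key point:
// the AVCapturePhotoOutput lives in a stored property, is configured once inside
// createCamera(), and the shutter action uses that same instance.
class CameraViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    private let cameraSession = AVCaptureSession()
    private let cameraPhotoOutput = AVCapturePhotoOutput()   // a property, not a function-local constant

    private func createCamera() {
        cameraSession.beginConfiguration()
        cameraSession.sessionPreset = AVCaptureSessionPresetPhoto

        // ... add the camera input and the preview layer exactly as in the question ...

        // Configure the stored property instead of a new local AVCapturePhotoOutput()
        if cameraSession.canAddOutput(cameraPhotoOutput) {
            cameraSession.addOutput(cameraPhotoOutput)
            cameraPhotoOutput.isHighResolutionCaptureEnabled = true
        }

        cameraSession.commitConfiguration()
        cameraSession.startRunning()
    }

    @IBAction func cameraShutter(_ sender: UIButton) {
        let photoSettings = AVCapturePhotoSettings()
        photoSettings.isHighResolutionPhotoEnabled = true
        // Same instance the session was configured with, so a live buffer feed exists here.
        cameraPhotoOutput.capturePhoto(with: photoSettings, delegate: self)
    }
}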

Likewise, replacing the cameraPhotoOutput = AVCapturePhotoOutput() constant with a cameraOutput: AVCapturePhotoOutput! variable, which I also tried, simply reproduced the error.

In case you're interested: the cgImage creation process stays the same inside the func capture(_ : ...) delegate function. Within its scope I also determine the camera device's position, change the image orientation if it is the front camera, and, on the main queue, send the photo to a var photoContent: UIImage? variable on a ReviewViewController.
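As a rough illustration of that last step, here is a hedged sketch; ReviewViewController and its photoContent variable come from the description above, while the helper name deliverPhoto, the front-camera orientation choice (.leftMirrored), and the presentation mechanism are assumptions:

import UIKit
import AVFoundation

// Hypothetical review screen; photoContent mirrors the variable described above.
class ReviewViewController: UIViewController {
    var photoContent: UIImage?
}

// Hypothetical helper: pick an orientation based on which camera produced the image
// (assumed .leftMirrored for the front camera, .right for the back camera as in the
// original code), then hand the photo to the review screen on the main queue.
func deliverPhoto(_ cgImage: CGImage,
                  from device: AVCaptureDevice,
                  presentingFrom presenter: UIViewController) {
    let orientation: UIImageOrientation =
        (device.position == .front) ? .leftMirrored : .right
    let photo = UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)

    DispatchQueue.main.async {
        let reviewVC = ReviewViewController()   // or instantiate it from a storyboard
        reviewVC.photoContent = photo
        presenter.present(reviewVC, animated: true, completion: nil)
    }
}

Called at the end of the delegate callback, e.g. deliverPhoto(cgImagePhotoRef!, from: camera, presentingFrom: self), this keeps all UI work on the main queue.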

Hope my mental slip helps someone else :-)