How do I create a simple camera app in Swift with Core ML that doesn't use live input?

Asked: 2019-04-07 19:41:44

Tags: ios swift camera image-recognition coreml

I have been trying to create a simple camera image-recognition app in Xcode with Swift that lets the user take a photo. The photo is then fed into an already trained Core ML model, and the prediction together with its confidence is output to a label.

I have searched multiple sites, but all I can find are tutorials such as

https://medium.freecodecamp.org/ios-coreml-vision-image-recognition-3619cf319d0b

which do real-time image recognition. I don't want it to be real time; I just want to let the user take a picture. I was wondering how to convert this code so that it takes non-live input:

import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
let label: UILabel = {
    let label = UILabel()
    label.textColor = .white
    label.translatesAutoresizingMaskIntoConstraints = false
    label.text = "Label"
    label.font = label.font.withSize(30)
    return label
}()
override func viewDidLoad() {

    super.viewDidLoad()

    // establish the capture session and add the label
    setupCaptureSession()
    view.addSubview(label)
    setupLabel()
    // Do any additional setup after loading the view, typically from a nib.
}
func setupCaptureSession() {
    // create a new capture session
    let captureSession = AVCaptureSession()

    // find the available cameras
    let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

    do {
        // select a camera
        if let captureDevice = availableDevices.first {
            captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice))
        }
    } catch {
        // print an error if the camera is not available
        print(error.localizedDescription)
    }

    // setup the video output to the screen and add output to our capture session
    let captureOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(captureOutput)
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.frame = view.frame
    view.layer.addSublayer(previewLayer)

    // buffer the video and start the capture session
    captureOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    captureSession.startRunning()
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // load our Core ML model
    guard let model = try? VNCoreMLModel(for: aslModel().model) else { return }

    // run an inference with CoreML
    let request = VNCoreMLRequest(model: model) { (finishedRequest, error) in

        // grab the inference results
        guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }

        // grab the highest confidence result
        guard let observation = results.first else { return }

        // create the label text components
        let predclass = "\(observation.identifier)"
        let predconfidence = String(format: "%.02f%%", observation.confidence * 100)

        // set the label text
        DispatchQueue.main.async(execute: {
            self.label.text = "\(predclass) \(predconfidence)"
        })
    }


    // create a Core Video pixel buffer which is an image buffer that holds pixels in main memory
    // Applications generating frames, compressing or decompressing video, or using Core Image
    // can all make use of Core Video pixel buffers
    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // execute the request
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}
func setupLabel() {
    // constrain the label in the center
    label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true

    // constrain the the label to 50 pixels from the bottom
    label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}

}

Right now it works as described above, taking a live image feed as input.

1 Answer:

Answer 0: (score: 0)

I wrote an article about this on Medium, but it is in Portuguese. See whether the automatic translation lets you follow it.

Swift + Core ML

I hope this helps.
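To summarize the still-image approach for anyone who cannot read the article: drop the AVCaptureSession and the sample-buffer delegate, let the user take one photo (for example with UIImagePickerController), and feed that photo to the same VNCoreMLRequest through a VNImageRequestHandler. Below is a minimal sketch under that assumption; it reuses the aslModel class from the question, while the view controller name, the tap gesture, and the image view are illustrative only and not part of the original code.

import UIKit
import Vision

class StillImageViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    let imageView = UIImageView()
    let label: UILabel = {
        let label = UILabel()
        label.textColor = .white
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "Label"
        label.font = label.font.withSize(30)
        return label
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black

        // show the captured photo instead of a live preview layer
        imageView.frame = view.bounds
        imageView.contentMode = .scaleAspectFill
        view.addSubview(imageView)

        view.addSubview(label)
        label.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
        label.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: -50).isActive = true

        // tap anywhere to take a photo (a button would work the same way)
        view.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(takePhoto)))
    }

    @objc func takePhoto() {
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = UIImagePickerController.isSourceTypeAvailable(.camera) ? .camera : .photoLibrary
        present(picker, animated: true)
    }

    // called once, after the user takes (or picks) a picture
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        imageView.image = image
        classify(image)
    }

    // run the Core ML model on a single still image instead of on every video frame
    func classify(_ image: UIImage) {
        guard let ciImage = CIImage(image: image),
              let model = try? VNCoreMLModel(for: aslModel().model) else { return }

        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            DispatchQueue.main.async {
                self.label.text = String(format: "%@ %.02f%%", top.identifier, top.confidence * 100)
            }
        }

        // do the work off the main thread, like the per-frame version does
        DispatchQueue.global(qos: .userInitiated).async {
            try? VNImageRequestHandler(ciImage: ciImage, orientation: .up, options: [:]).perform([request])
        }
    }
}

Note that using the camera source requires an NSCameraUsageDescription entry in Info.plist, and on the simulator the picker falls back to the photo library.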