Calling multiple requests in Swift

Asked: 2018-02-25 23:28:53

Tags: ios swift

I'm using CoreML and trying to run two models against the camera feed for image recognition. However, I can't seem to get a VNCoreMLRequest to run both models. Any suggestions on how to run two models with this request?

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    var fitness_identifer = ""
    var fitness_confidence = 0

    guard let model_one = try? VNCoreMLModel(for: imagenet_ut().model) else { return }
    guard let model_two = try? VNCoreMLModel(for: ut_legs2().model) else { return }

    // Does not compile – see the error below
    let request = VNCoreMLRequest(model: [model_one, model_two]) { (finishedRequest, error) in
        guard let results = finishedRequest.results as? [VNClassificationObservation] else { return }
        guard let Observation = results.first else { return }

        DispatchQueue.main.async(execute: {
            fitness_identifer = Observation.identifier
            fitness_confidence = Int(Observation.confidence * 100)

            self.label.text = "\(Int(fitness_confidence))% it's a \(fitness_identifer)"
        })
    }

    guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // executes request
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
}

This is the error I get when I try to pass the two models as an array (it worked when I used just the one model):

Contextual type 'VNCoreMLModel' cannot be used with array literal
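The error occurs because `VNCoreMLRequest(model:)` accepts a single `VNCoreMLModel`, not an array. One `VNImageRequestHandler` can, however, perform several requests on the same frame in a single call. A minimal sketch, assuming `model_one`, `model_two`, and `pixelBuffer` from the code above:

```swift
import Vision

// Each VNCoreMLRequest wraps exactly one model...
let requestOne = VNCoreMLRequest(model: model_one) { finishedRequest, error in
    // handle classification results from the first model
}
let requestTwo = VNCoreMLRequest(model: model_two) { finishedRequest, error in
    // handle classification results from the second model
}

// ...but one handler can perform an array of requests on the same pixel buffer.
try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    .perform([requestOne, requestTwo])
```

Note also that the models are loaded inside `captureOutput`, i.e. on every frame; loading them once (e.g. as stored properties) avoids repeating that expensive step.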

1 Answer:

Answer 0 (score: 0)

Why not run two separate requests in an AsyncGroup:

let request1 = VNCoreMLRequest(model: model_one) { (finishedRequest, error) in
    //...
}

let request2 = VNCoreMLRequest(model: model_two) { (finishedRequest, error) in
    //...
}

//...
let group = AsyncGroup()
group.background {
    // Run on background queue
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request1])
}
group.background {
    // Run on background queue
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request2])
}
group.wait()
// Both operations completed here
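The same pattern can be written with `DispatchGroup` from Apple's Dispatch framework, avoiding the third-party `AsyncGroup` dependency. A sketch, assuming `request1`, `request2`, and `pixelBuffer` exist as in the answer above (the completion closure is a hypothetical addition for illustration):

```swift
import Vision
import Dispatch

func performBothRequests(request1: VNRequest,
                         request2: VNRequest,
                         pixelBuffer: CVPixelBuffer,
                         completion: @escaping () -> Void) {
    let group = DispatchGroup()
    let queue = DispatchQueue.global(qos: .userInitiated)

    // Run each request on a background queue, tracked by the group.
    queue.async(group: group) {
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            .perform([request1])
    }
    queue.async(group: group) {
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
            .perform([request2])
    }

    // Notify rather than wait(): blocking the capture callback's thread
    // would stall the camera pipeline.
    group.notify(queue: .main) { completion() }
}
```

Using `notify` keeps `captureOutput` non-blocking, which matters because `AVCaptureVideoDataOutput` drops frames while its delegate is busy.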