Tracking an object from a VNCoreMLFeatureValueObservation

Time: 2019-06-27 05:42:51

Tags: ios swift coreml vision

I'm using Vision with a Core ML model whose output is a raw prediction rather than a classification or image-to-image result. I feed in a pixelBuffer and get back an array of VNCoreMLFeatureValueObservation:

func createCoreMLRequest() -> VNCoreMLRequest? {
    guard let model = try? VNCoreMLModel(for: yolo.model.model) else { return nil }
    let request = VNCoreMLRequest(model: model) { (finReq, err) in
        if let results = finReq.results as? [VNCoreMLFeatureValueObservation] {
            // Extract the raw MLMultiArray outputs from each observation.
            let features = results.compactMap { $0.featureValue.multiArrayValue }
            // YOLO.swift decodes the arrays into rect/class/confidence predictions.
            let boundingBoxes = self.yolo.computeBoundingBoxes(features: features)
            guard let prediction = boundingBoxes.first else { return }
            // Seed the tracker with the first prediction's rect.
            self.observation = VNDetectedObjectObservation(boundingBox: prediction.rect)
            DispatchQueue.main.async {
                self.highlightView.frame = prediction.rect
            }
        }
    }
    return request
}
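For context, a request like the one above is typically performed once per captured frame with a VNImageRequestHandler. A minimal sketch, assuming the pixelBuffer comes from the question's capture pipeline and the request from createCoreMLRequest():

```swift
import Vision

// Sketch: run the Core ML request on a single frame.
// `pixelBuffer` and `request` are assumed to come from the code above.
func runDetection(on pixelBuffer: CVPixelBuffer, request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    do {
        // The request's completion handler (above) fires when this returns.
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```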

I then take the MLMultiArray values from those observations and feed them into YOLO.swift to compute the bounding boxes. The result is an array of predictions, where each prediction is a struct containing a rect (the boundingBox), a className, and a confidence.
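For reference, a prediction struct of the shape described above might look like the following; the field names are assumptions based on the description, not taken from the actual YOLO.swift source:

```swift
import CoreGraphics

// Hypothetical shape of the prediction struct described above.
struct Prediction {
    let className: String   // detected class label
    let confidence: Float   // detection confidence score
    let rect: CGRect        // bounding box used to seed tracking
}
```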

Using one of those predictions, I want to track the object. So I create a VNDetectedObjectObservation from the prediction's boundingBox, then create a VNTrackObjectRequest and pass it the detected observation. But the resulting observation always comes back with the same boundingBox as the initial prediction.

func createTrackRequest() -> VNTrackObjectRequest {
    // Seed the tracker with the observation stored by the detection step.
    let trackRequest = VNTrackObjectRequest(detectedObjectObservation: self.observation) { (finReq, err) in
        if let results = finReq.results as? [VNDetectedObjectObservation] {
            if let observation = results.first {
                // Carry the updated observation forward for the next frame.
                self.observation = observation
                DispatchQueue.main.async {
                    self.highlightView.frame = observation.boundingBox
                }
            }
        }
    }
    return trackRequest
}
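For reference, a track request like the one above is usually driven frame by frame with a single VNSequenceRequestHandler kept alive for the whole tracking session, since that handler holds Vision's inter-frame state. A minimal sketch (function and parameter names are assumptions, not from the question's project):

```swift
import Vision

// One handler reused across all frames of the tracking session;
// Vision stores its tracking state here, so it must not be
// recreated for every frame.
let sequenceHandler = VNSequenceRequestHandler()

// Sketch: feed each new frame and the current track request to the
// long-lived sequence handler.
func track(on pixelBuffer: CVPixelBuffer, request: VNTrackObjectRequest) {
    do {
        try sequenceHandler.perform([request], on: pixelBuffer)
    } catch {
        print("Tracking failed: \(error)")
    }
}
```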

I'm not sure why this is happening. Any suggestions?

0 Answers:

There are no answers.