I am trying to run a Core ML model via VNCoreMLRequest with a CVPixelBuffer obtained in the AVCapturePhotoCaptureDelegate method didFinishProcessingPhoto (photo.pixelBuffer). I pass this pixelBuffer to a VNImageRequestHandler and perform it on the following request:
DispatchQueue.global(qos: .userInitiated).async {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
    do {
        try handler.perform([self.coreMLRequest()])
    } catch {
        return
    }
}
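For context, the pixelBuffer is obtained in the capture delegate roughly as sketched below. This is an assumption about the surrounding code, not the original implementation: the `CameraController` type and the `runVision(on:)` helper are hypothetical names, and the capture-session setup is not shown.

```swift
import AVFoundation
import Vision

// Hypothetical controller type; only the delegate method matters here.
extension CameraController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // photo.pixelBuffer is non-nil only when the photo was captured
        // in an uncompressed format (e.g. kCVPixelFormatType_32BGRA).
        guard error == nil, let pixelBuffer = photo.pixelBuffer else { return }
        runVision(on: pixelBuffer) // hands the buffer to the handler shown above
    }
}
```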
private func coreMLRequest() -> VNRequest {
    guard let model = model else {
        fatalError()
    }
    let request = VNCoreMLRequest(model: model) { (req, err) in
        if let error = err {
            print("error: \(error)")
        }
        if let observations = req.results as? [VNClassificationObservation] {
            print("observations: \(observations.count)")
        }
    }
    request.imageCropAndScaleOption = .centerCrop
    return request
}
The error reported inside the VNCoreMLRequest completion handler is:
Error Domain=com.apple.vis Code=3 "Failed to transfer inBuffer to croppedBuffer. Error -12905"
UserInfo={NSLocalizedDescription=Failed to transfer inBuffer to croppedBuffer. Error -12905}