How to make predictions in Swift with a model that returns MultiArray (Double)

Date: 2019-12-30 21:11:30

Tags: swift coreml

I trained a model with Keras and converted it to Core ML using coremltools. You can see the details of the model here:

(screenshot of the model details omitted)

How do I make predictions with this model? When I try, I get this error:

2019-12-30 13:07:01.564792-0800 agricultural-helper[16042:6014777] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid argument": generic_reshape_kernel: Invalid bottom shape (512 28 -3 1 1) for reshape to (512 -1 1 1 1) status=-6
2019-12-30 13:07:01.565447-0800 agricultural-helper[16042:6014777] [coreml] Error computing NN outputs -6
Error Domain=com.apple.CoreML Code=0 "Error computing NN outputs." UserInfo={NSLocalizedDescription=Error computing NN outputs.}

Here is my code:

override init() {
    super.init()
    let options = MLPredictionOptions()
    options.usesCPUOnly = true
    let model = CropDisease()
    let uiImage = UIImage(named: "test.png")!
    let pixelBuffer = buffer(from: uiImage)!
    let modelInput = CropDiseaseInput(conv2d_input: pixelBuffer)

    do {
        let output = try model.prediction(input: modelInput, options: options)
        print(output)
    } catch {
        print(error)
    }
}

func buffer(from image: UIImage) -> CVPixelBuffer? {
  // Create a 32ARGB pixel buffer the same size as the image.
  let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue, kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
  var pixelBuffer : CVPixelBuffer?
  let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
  guard (status == kCVReturnSuccess) else {
    return nil
  }

  // Draw the UIImage into the buffer's memory via a CGContext.
  CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
  let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

  let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
  let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

  // Flip the coordinate system so UIKit's top-left origin draws correctly.
  context?.translateBy(x: 0, y: image.size.height)
  context?.scaleBy(x: 1.0, y: -1.0)

  UIGraphicsPushContext(context!)
  image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
  UIGraphicsPopContext()
  CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

  return pixelBuffer
}

Any help is appreciated. Thanks!

1 answer:

Answer 0 (score: 0)

It looks like your Core ML model has an internal problem. Try making a prediction from Python using coremltools. My guess is that you will get the same error message.
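A minimal sketch of that check, assuming the converted model is saved as "CropDisease.mlmodel" and that the input name "conv2d_input" and a 224x224 input size match the model (the file name and sizes are assumptions based on the question's Swift code; note that coremltools can only run predictions on macOS):

```python
def predict_with_coremltools(model_path, image_path, size=(224, 224)):
    # Hypothetical debugging helper: run the converted model directly from
    # Python. Imports are local so the sketch stays self-contained.
    import coremltools as ct  # pip install coremltools
    from PIL import Image     # pip install pillow

    model = ct.models.MLModel(model_path)
    image = Image.open(image_path).resize(size)
    # The returned dict maps output names to values; a MultiArray(Double)
    # output comes back as a numpy array.
    return model.predict({"conv2d_input": image})
```

If `predict_with_coremltools("CropDisease.mlmodel", "test.png")` raises the same reshape error, the bug is in the converted model itself rather than in the Swift code.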

If that is indeed the case, then the solution is to figure out exactly where in the model the problem is. The error message already gives a big hint:

    generic_reshape_kernel: Invalid bottom shape (512 28 -3 1 1) for reshape to (512 -1 1 1 1)

Somewhere there is a layer that is receiving a tensor of shape (512, 28, -3, 1, 1) but expects (512, -1, 1, 1, 1). Note that the -1 is not necessarily a problem (it usually means "automatically compute the size of this dimension"), but the -3 looks wrong...
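As an illustration of that convention (using numpy here, not Core ML): a single -1 in a reshape target is a placeholder that the library fills in from the total element count:

```python
import numpy as np

# 512 * 28 * 3 = 43008 elements in total.
x = np.zeros((512, 28, 3))

# -1 means "infer this axis so the element count still matches":
# 43008 / 512 = 84, so the result has shape (512, 84).
y = x.reshape(512, -1)
print(y.shape)  # (512, 84)
```

A value like -3, by contrast, is not a meaningful dimension size, which is consistent with the reshape kernel rejecting the bottom shape in the error above.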