Swift: Handling UIImage data for a Firebase custom TFLite model

Date: 2018-06-16 22:35:12

Tags: ios arrays firebase tensorflow tensorflow-lite

I am using Swift, Firebase, and TensorFlow to build an image recognition model. I have a retrained MobileNet model that takes an input array of [1, 224, 224, 3], which I copied into my Xcode bundle. When I try to add data from an image as the input, I get the error: Input 0 should have 602112 bytes, but found 627941 bytes. I am using the following code:

    let input = ModelInputs()
    do {
        let newImage = image.resizeTo(size: CGSize(width: 224, height: 224))

        let data = UIImagePNGRepresentation(newImage)

        // Store input data in `data`

        // ...
        try input.addInput(data)
        // Repeat as necessary for each input index
    } catch let error as NSError {
        print("Failed to add input: \(error.localizedDescription)")
    }



    interpreter.run(inputs: input, options: ioOptions) { outputs, error in
        guard error == nil, let outputs = outputs else {
            print(error!.localizedDescription) // ERROR BEING CALLED HERE
            return }
        // Process outputs
        print(outputs)
        // ...
    }

How can I reprocess the image data into 602112 bytes? I'm confused, so if anyone could help me out that would be great :)

1 Answer:

Answer 0 (score: 2)

Please check out the Quick Start iOS demo app in Swift to see how to use a custom TFLite model:

https://github.com/firebase/quickstart-ios/tree/master/mlmodelinterpreter

In particular, I think this is what you're looking for:

https://github.com/firebase/quickstart-ios/blob/master/mlmodelinterpreter/MLModelInterpreterExample/UIImage%2BTFLite.swift#L47
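That helper captures the key idea: the interpreter wants the raw tensor bytes, not an encoded image. UIImagePNGRepresentation returns PNG-compressed bytes (hence the varying 627941), while the model expects 224 × 224 × 3 Float32 values, i.e. 224 * 224 * 3 * 4 = 602112 bytes. As a rough, minimal sketch of that conversion (the scaledRGBFloatData name is made up here, and the [0, 1] normalization is an assumption for a standard float MobileNet; adjust it to whatever your retrained model expects):

    import UIKit

    extension UIImage {
        // Hypothetical helper: draws the image into a 224x224 RGBA bitmap,
        // then repacks the RGB channels as normalized Float32 values.
        func scaledRGBFloatData(size: CGSize = CGSize(width: 224, height: 224)) -> Data? {
            let width = Int(size.width)
            let height = Int(size.height)
            let bytesPerPixel = 4                     // RGBA in the bitmap context
            let bytesPerRow = bytesPerPixel * width
            var pixelBuffer = [UInt8](repeating: 0, count: bytesPerRow * height)

            guard let cgImage = self.cgImage else { return nil }

            // Draw the image into a 224x224 bitmap so we can read raw pixels.
            let drawn: Bool = pixelBuffer.withUnsafeMutableBytes { rawBuffer in
                guard let context = CGContext(data: rawBuffer.baseAddress,
                                              width: width,
                                              height: height,
                                              bitsPerComponent: 8,
                                              bytesPerRow: bytesPerRow,
                                              space: CGColorSpaceCreateDeviceRGB(),
                                              bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
                else { return false }
                context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
                return true
            }
            guard drawn else { return nil }

            // Drop the alpha channel and scale each channel to [0, 1] as Float32:
            // 224 * 224 * 3 channels * 4 bytes per Float32 = 602112 bytes.
            var floats = [Float32]()
            floats.reserveCapacity(width * height * 3)
            for row in 0..<height {
                for col in 0..<width {
                    let offset = row * bytesPerRow + col * bytesPerPixel
                    floats.append(Float32(pixelBuffer[offset]) / 255.0)     // R
                    floats.append(Float32(pixelBuffer[offset + 1]) / 255.0) // G
                    floats.append(Float32(pixelBuffer[offset + 2]) / 255.0) // B
                }
            }
            return floats.withUnsafeBufferPointer { Data(buffer: $0) }
        }
    }

Then pass the result of that conversion to input.addInput(...) instead of the PNG data.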

Good luck!