tensorflow 1.13.0 and 1.14.0rc0: serving model to tflite

Time: 2019-06-06 14:15:08

Tags: tensorflow tensorflow-serving

I am trying to convert a BERT serving model into its tflite equivalent.

The code snippet used for the conversion to the tflite format is:

import os
import tensorflow as tf

cur_dir = os.getcwd()
saved_model_dir = os.path.join(cur_dir, 'serving_dir')

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
open(os.path.join(cur_dir, 'cnv1.tflite'), 'wb').write(tflite_model)

If the input shape is None, I get the following error:

Traceback (most recent call last):
  File "temp.py", line 13, in <module>
    tflite_model = converter.convert()
  File "/media/data2/anaconda3/envs/tf-lite/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 406, in convert
    "'{0}'.".format(_tensor_name(tensor)))
ValueError: Provide an input shape for input array 'input_example_tensor'.
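
For reference, passing a fixed shape to the converter looks roughly like this (a minimal sketch, assuming the TF 1.x input_shapes argument of from_saved_model; the tensor name is taken from the error above, and the batch of 10 is just an example):

import os
import tensorflow as tf

saved_model_dir = os.path.join(os.getcwd(), 'serving_dir')

# Supply an explicit shape for the input named in the error message;
# [10] stands for a batch of 10 serialized tf.Example strings.
converter = tf.lite.TFLiteConverter.from_saved_model(
    saved_model_dir,
    input_shapes={'input_example_tensor': [10]})
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()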

The batch size has already been specified in the code. If I fix the shape to (10,), I am able to convert the model to its tflite equivalent. To run inference on the resulting tflite model, the code snippet from here has been used, but interpreter.allocate_tensors() throws the following:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/media/data2/anaconda3/envs/tf-lite/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py", line 73, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/media/data2/anaconda3/envs/tf-lite/lib/python3.6/site-packages/tensorflow/lite/python/interpreter_wrapper/tensorflow_wrap_interpreter_wrapper.py", line 106, in AllocateTensors
    return _tensorflow_wrap_interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you invoke the Flex delegate before inference.
Node number 0 (Flex) failed to prepare.
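
The inference snippet referred to above is essentially the standard TFLite Python example; a minimal sketch of it follows (the model file name comes from the conversion code above, and the dummy input is only a placeholder, since allocate_tensors() already fails before any data is fed):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='cnv1.tflite')
interpreter.allocate_tensors()  # this is the call that raises the RuntimeError above

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input matching the reported shape/dtype (never reached in my case).
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))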

Someone has reported this bug on github. To address it, jdduke commented:

  We hope to resolve this issue in the 1.14 release.

I have already tried 1.14.0rc0, but I still get the same error.
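
Roughly how I re-checked under the release candidate (a sketch; same cnv1.tflite file as above):

import tensorflow as tf

print(tf.__version__)  # prints 1.14.0rc0 in this environment

interpreter = tf.lite.Interpreter(model_path='cnv1.tflite')
interpreter.allocate_tensors()  # still fails with the same Flex RuntimeError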

0 Answers:

No answers yet