When trying to convert the saved model to a tflite file, I get the following error:

F tensorflow/contrib/lite/toco/tflite/export.cc:363] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.toco_convert(). Here is a list of operators for which you would need custom implementations: AsString, ParseExample.
Aborted (core dumped)
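For reference, the allow_custom_ops switch the error mentions can be set on the converter object itself. This is only a minimal sketch, assuming the TF 1.x tf.contrib.lite.TocoConverter used below and an illustrative SavedModel directory; it only silences the check, and the unsupported ops would still need custom kernels at runtime:

import tensorflow as tf

# Illustrative path to the timestamped directory produced by export_savedmodel.
saved_model_dir = "/tmp/iris_model/1538000000"

converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.allow_custom_ops = True  # keep AsString/ParseExample as custom ops instead of aborting
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)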
I am using a pre-made DNNClassifier estimator.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

IRIS_TRAINING = "iris_training.csv"
IRIS_TEST = "iris_test.csv"
INPUT_TENSOR_NAME = 'inputs'


def main():
    training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
        filename=IRIS_TRAINING,
        target_dtype=np.int,
        features_dtype=np.float32)

    feature_columns = [tf.feature_column.numeric_column(INPUT_TENSOR_NAME, shape=[4])]

    # Build 3 layer DNN with 10, 20, 10 units respectively.
    classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 20, 10],
                                            n_classes=3,
                                            model_dir="/tmp/iris_model")

    # Define the training inputs
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={INPUT_TENSOR_NAME: np.array(training_set.data)},
        y=np.array(training_set.target),
        num_epochs=None,
        shuffle=True)

    # Train model.
    classifier.train(input_fn=train_input_fn, steps=2000)

    inputs = {'x': tf.placeholder(tf.float32, [4])}
    tf.estimator.export.ServingInputReceiver(inputs, inputs)

    saved_model = classifier.export_savedmodel(export_dir_base="/tmp/iris_model",
                                               serving_input_receiver_fn=serving_input_receiver_fn)
    print(saved_model)

    converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model)
    tflite_model = converter.convert()


def serving_input_receiver_fn():
    feature_spec = {INPUT_TENSOR_NAME: tf.FixedLenFeature(dtype=tf.float32, shape=[4])}
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()


if __name__ == "__main__":
    main()
The iris data files can be downloaded from the following links:
IRIS_TRAINING file: http://download.tensorflow.org/data/iris_training.csv
IRIS_TEST file: http://download.tensorflow.org/data/iris_test.csv
Answer 0 (score: 1)
ParseExample is used inside tf.estimator.export.build_parsing_serving_input_receiver_fn.
If you want to avoid it, you should use tf.estimator.export.build_raw_serving_input_receiver_fn instead.
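For example, a minimal raw receiver for the four-float iris feature vector could look like the sketch below; it reuses the feature name and shape from the question's code and is not the only possible layout:

def serving_input_receiver_fn():
    # Feed the raw float features directly, so no tf.Example parsing
    # (and therefore no ParseExample op) ends up in the exported serving graph.
    features = {INPUT_TENSOR_NAME: tf.placeholder(tf.float32, shape=[None, 4], name=INPUT_TENSOR_NAME)}
    return tf.estimator.export.build_raw_serving_input_receiver_fn(features)()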
Keep in mind that when you want to run predictions on the resulting SavedModel, you should set signature_def_key="predict".
So it would look like this:

predict_fn = predictor.from_saved_model(export_dir='tmp/...', signature_def_key="predict")
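As a usage sketch, assuming the model was exported with the raw receiver above (the export path is just illustrative, and the output keys are the usual DNNClassifier prediction keys):

from tensorflow.contrib import predictor

# export_dir is the timestamped directory returned by export_savedmodel.
predict_fn = predictor.from_saved_model(export_dir='/tmp/iris_model/1538000000',
                                        signature_def_key="predict")

# With the raw receiver, the input key is the feature name ('inputs').
predictions = predict_fn({'inputs': [[6.4, 3.2, 4.5, 1.5]]})
print(predictions['probabilities'])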