I'm following a tutorial from codelabs. They use this script to optimize the model:
python -m tensorflow.python.tools.optimize_for_inference \
--input=tf_files/retrained_graph.pb \
--output=tf_files/optimized_graph.pb \
--input_names="input" \
--output_names="final_result"
They use this script to verify optimized_graph.pb:
python -m scripts.label_image \
--graph=tf_files/optimized_graph.pb \
--image=tf_files/flower_photos/daisy/3475870145_685a19116d.jpg
The problem is that I tried to use optimize_for_inference on my own model, which is not for image classification. My invocation would look something like the sketch below: --input_names matches my graph's input placeholder (inputtensors, which also shows up in the error further down), while the --output_names value here is only a stand-in for whatever my graph's final op is actually called:
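python -m tensorflow.python.tools.optimize_for_inference \
--input=not_optimized_model.pb \
--output=optimized_model.pb \
--input_names="inputtensors" \
--output_names="output"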
Previously, before optimizing, I used this script to verify my model by testing it against sample data:
import tensorflow as tf
import numpy as np

def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="prefix")
    # assume the first op is the input and the last op is the output
    input_name = graph.get_operations()[0].name + ':0'
    output_name = graph.get_operations()[-1].name + ':0'
    return graph, input_name, output_name

def predict(model_path, input_data):
    # load tf graph
    tf_model, tf_input, tf_output = load_graph(model_path)
    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)

    # serialize the floats into a tf.Example proto, since the exported
    # graph expects a serialized string as input
    model_input = tf.train.Example(
        features=tf.train.Features(feature={
            "thisisinput": tf.train.Feature(float_list=tf.train.FloatList(value=input_data)),
        }))
    model_input = model_input.SerializeToString()

    num_outputs = 3
    predictions = np.zeros(num_outputs)
    with tf.Session(graph=tf_model) as sess:
        y_out = sess.run(y, feed_dict={x: [model_input]})
        predictions = y_out
    return predictions

if __name__ == "__main__":
    input_data = [4.7, 3.2, 1.6, 0.2]  # my model receives 4 inputs
    print(np.argmax(predict("not_optimized_model.pb", input_data)))
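As a side note, a quick way to double-check what load_graph picks up is to print every op name, since it simply assumes the first op is the input and the last op is the output:

# optional sanity check: list all op names so the assumed
# first/last input/output ops can be verified by eye
graph, input_name, output_name = load_graph("not_optimized_model.pb")
for op in graph.get_operations():
    print(op.name)
print("picked input:", input_name, "picked output:", output_name)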
But after optimizing the model, my verification script no longer works. It raises this error:
ValueError: Input 0 of node import/ParseExample/ParseExample was passed float from import/inputtensors:0 incompatible with expected string.
So my question is: how can I verify my model after optimizing it? I can't use the --image flag the way the tutorial does.
Answer 0 (score: 0):
I solved the error by changing the placeholder's type to tf.float32 when exporting the model, so the graph takes raw floats instead of a serialized tf.Example string:
def my_serving_input_fn():
    input_data = {
        "featurename": tf.placeholder(tf.float32, [None, 4], name='inputtensors')
    }
    return tf.estimator.export.ServingInputReceiver(input_data, input_data)
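For completeness, a minimal sketch of how this serving input function is wired into the export; estimator is assumed to be an already-trained tf.estimator.Estimator, and the export directory name is illustrative:

# export the model using the float placeholder defined above
estimator.export_savedmodel("export_dir", my_serving_input_fn)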
and then changing the predict function above to feed the floats directly, without the tf.Example serialization:
def predict(model_path, input_data):
    # load tf graph
    tf_model, tf_input, tf_output = load_graph(model_path)
    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)

    num_outputs = 3
    predictions = np.zeros(num_outputs)
    with tf.Session(graph=tf_model) as sess:
        # feed the raw floats directly; no tf.Example serialization needed
        y_out = sess.run(y, feed_dict={x: [input_data]})
        predictions = y_out
    return predictions
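A minimal usage sketch for the updated function, mirroring the original __main__ block (the file name optimized_model.pb is illustrative):

if __name__ == "__main__":
    input_data = [4.7, 3.2, 1.6, 0.2]  # same 4-feature sample as before
    print(np.argmax(predict("optimized_model.pb", input_data)))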
After freezing the model, the prediction code above works. Unfortunately, it raises another error when trying to load the pb directly after exporting the model.