I am using a classification model exported from Google AutoML Vision, so I only have a saved_model.pb and no variables, checkpoints and so on.
I would like to load this model graph into a local TensorFlow installation, use it for inference, and then continue training it on more images.
Main questions:
Is this plan feasible at all, i.e. loading a single saved_model.pb without variables, checkpoints etc. and training the resulting graph on new data?
If yes: how do I get an image, encoded as a string, into an input of shape (?,)?
Ideally, looking further ahead: is there anything important to keep in mind for the training part?
Background information about the code:
To read the image I use the same approach as when running inference with the Docker container, i.e. a base64-encoded image.
To load the graph, I checked via the CLI (saved_model_cli show --dir input/model) which tag set the graph needs, which is serve.
To get the input tensor names I used graph.get_operations(), which gives Placeholder:0 for image_bytes and Placeholder_1:0 for key (just an arbitrary string identifying the image). Both have dimension dim -1.
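For reference, a minimal sketch of how the placeholder names and shapes can be listed from the loaded graph (model path and the serve tag as above; the exact names depend on the export):

import tensorflow as tf

path_mdl = "input/model"

with tf.Session(graph=tf.Graph()) as sess:
    # load the SavedModel with the tag set reported by saved_model_cli
    tf.saved_model.loader.load(sess, ["serve"], path_mdl)
    graph = tf.get_default_graph()

    # print every placeholder op together with its static shape
    for op in graph.get_operations():
        if op.type == "Placeholder":
            print(op.name, op.outputs[0].shape)

The code I am running: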
import io
import base64

import numpy as np
import tensorflow as tf

path_img = "input/testimage.jpg"
path_mdl = "input/model"

# input to network is expected to be a base64-encoded image
with io.open(path_img, 'rb') as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode('utf-8')

# reshaping to (1,) as the expected dimension is (?,)
feed_dict_option1 = {
    "Placeholder:0": np.array(str(encoded_image)).reshape(1,),
    "Placeholder_1:0": "image_key"
}

# reshaping to (1,1) as the expected dimension is (?,)
feed_dict_option2 = {
    "Placeholder:0": np.array(str(encoded_image)).reshape(1, 1),
    "Placeholder_1:0": "image_key"
}

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], path_mdl)
    graph = tf.get_default_graph()

    sess.run('scores:0', feed_dict=feed_dict_option1)
    sess.run('scores:0', feed_dict=feed_dict_option2)
Output:
# for input reshaped to (1,)
ValueError: Cannot feed value of shape (1,) for Tensor 'Placeholder:0', which has shape '(?,)'
# for input reshaped to (1,1)
ValueError: Cannot feed value of shape (1, 1) for Tensor 'Placeholder:0', which has shape '(?,)'
How do I get the input into shape (?,)?
Thanks a lot.
Answer 0 (score: 3)
Yes, it is possible! I have an object detection model that should be similar, and I can run it as follows in tensorflow 1.14.0:
import cv2

# `filepath` points at a test image; `sess` is a tf.Session into which the
# SavedModel has already been loaded (as in the question above)
img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                sess.graph.get_tensor_by_name('detection_scores:0'),
                sess.graph.get_tensor_by_name('detection_boxes:0'),
                sess.graph.get_tensor_by_name('detection_classes:0')],
               feed_dict={'encoded_image_string_tensor:0': inp})
I used netron to find out the inputs.
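If netron is not at hand, the input and output tensor names can also be read from the SavedModel's signature; a small sketch, assuming the serve tag and the serving_default signature (with saved_model_dir being the export directory):

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef, which carries the signature defs
    meta_graph_def = tf.saved_model.loader.load(sess, ["serve"], saved_model_dir)
    sig = meta_graph_def.signature_def["serving_default"]
    print({name: info.name for name, info in sig.inputs.items()})
    print({name: info.name for name, info in sig.outputs.items()})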
In tensorflow 2.0 it is even easier:
import cv2
import tensorflow as tf

img = cv2.imread(filepath)
flag, bts = cv2.imencode('.jpg', img)
inp = [bts[:, 0].tobytes()]

saved_model_dir = '.'
loaded = tf.saved_model.load(export_dir=saved_model_dir)
infer = loaded.signatures["serving_default"]
out = infer(key=tf.constant('something_unique'), image_bytes=tf.constant(inp))
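The call returns a dict of tensors keyed by the signature's output names (which names exist depends on the export; infer.structured_outputs lists them), so the plain values can be pulled out for example like this:

# inspect the available outputs, then convert them to numpy
print(infer.structured_outputs)
out_np = {name: tensor.numpy() for name, tensor in out.items()}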
Note that saved_model.pb is not the same as a frozen_inference_graph.pb, see: What is difference frozen_inference_graph.pb and saved_model.pb?
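A quick way to tell which case you have is to check whether the export contains a variables directory next to the saved_model.pb; a minimal check, assuming the standard SavedModel layout:

import os

saved_model_dir = "input/model"  # the export directory from the question

# A SavedModel whose weights can be restored for further training ships a
# variables/ sub-directory with checkpoint shards; if it is missing, the
# weights were baked into the graph as constants.
print(os.path.isdir(os.path.join(saved_model_dir, "variables")))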