We deployed many models from TF1 by saving them as frozen graphs:
tf.train.write_graph(self.session.graph_def, some_path)

# get graph definition with weights
output_graph_def = tf.graph_util.convert_variables_to_constants(
    self.session,                       # The session is used to retrieve the weights
    self.session.graph.as_graph_def(),  # The graph_def is used to retrieve the nodes
    output_nodes,                       # The output node names are used to select the useful nodes
)

# optimize graph
if optimize:
    output_graph_def = optimize_for_inference_lib.optimize_for_inference(
        output_graph_def, input_nodes, output_nodes, tf.float32.as_datatype_enum
    )

with open(path, "wb") as f:
    f.write(output_graph_def.SerializeToString())
and then load them with:
with tf.Graph().as_default() as graph:
    with graph.device("/" + args[name].processing_unit):
        tf.import_graph_def(graph_def, name="")
        for key, value in inputs.items():
            self.input[key] = graph.get_tensor_by_name(value + ":0")
We would like to save TF2 models in a similar way: a single protobuf file that contains both the graph and the weights. How can I achieve this?

I know of these saving methods:

keras.experimental.export_saved_model(model, 'path_to_saved_model')

which is experimental and creates multiple files :(.

model.save('path_to_my_model.h5')

which saves the h5 format :(.

tf.saved_model.save(self.model, "test_x_model")

which again saves multiple files :(.
Answer 0 (score: 2)
The code above is a bit dated: it converts vgg16 successfully, but fails on a resnet_v2_50 model. My TF version is 2.2.0. I finally found a code snippet that works:
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
import numpy as np

# use ResNet50V2 as an example
model = tf.keras.applications.ResNet50V2()

full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Get frozen ConcreteFunction
frozen_func = convert_variables_to_constants_v2(full_model)
frozen_func.graph.as_graph_def()

layers = [op.name for op in frozen_func.graph.get_operations()]
print("-" * 50)
print("Frozen model layers: ")
for layer in layers:
    print(layer)
print("-" * 50)
print("Frozen model inputs: ")
print(frozen_func.inputs)
print("Frozen model outputs: ")
print(frozen_func.outputs)

# Save frozen graph from frozen ConcreteFunction to hard drive
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)
Ref: https://github.com/leimao/Frozen_Graph_TensorFlow/tree/master/TensorFlow_v2 (updated)
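That repository also covers loading the frozen graph back and running inference without explicit sessions. A minimal sketch using tf.compat.v1.wrap_function; the tensor names "x:0" and "Identity:0" are assumptions based on the lambda above, so verify them against the printed frozen_func.inputs/outputs for your model:

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Import the GraphDef into a wrapped tf.function and prune it down
    # to the requested input/output tensors.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

with tf.io.gfile.GFile("./frozen_models/frozen_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Tensor names here are assumptions; adapt them to your graph.
frozen_func = wrap_frozen_graph(graph_def, inputs=["x:0"], outputs=["Identity:0"])
predictions = frozen_func(tf.random.normal([1, 224, 224, 3]))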
Answer 1 (score: 2)
I ran into a similar problem and found the solution below:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.python.tools import optimize_for_inference_lib

optimize = True  # set to False to skip the optimization pass

loaded = tf.saved_model.load('models/mnist_test')
infer = loaded.signatures['serving_default']

f = tf.function(infer).get_concrete_function(
    flatten_input=tf.TensorSpec(shape=[None, 28, 28, 1],
                                dtype=tf.float32))  # change this line for your own inputs
f2 = convert_variables_to_constants_v2(f)
graph_def = f2.graph.as_graph_def()

if optimize:
    # Remove NoOp nodes
    for i in reversed(range(len(graph_def.node))):
        if graph_def.node[i].op == 'NoOp':
            del graph_def.node[i]
    # Remove control-dependency inputs (names starting with '^')
    for node in graph_def.node:
        for i in reversed(range(len(node.input))):
            if node.input[i][0] == '^':
                del node.input[i]
    # Parse graph's inputs/outputs
    graph_inputs = [x.name.rsplit(':')[0] for x in f2.inputs]
    graph_outputs = [x.name.rsplit(':')[0] for x in f2.outputs]
    graph_def = optimize_for_inference_lib.optimize_for_inference(graph_def,
                                                                  graph_inputs,
                                                                  graph_outputs,
                                                                  tf.float32.as_datatype_enum)

# Export frozen graph
with tf.io.gfile.GFile('optimized_graph.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())
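For context, a minimal sketch of producing the 'models/mnist_test' SavedModel that the snippet loads. The architecture is a hypothetical MNIST classifier; naming the first layer "flatten" is assumed to yield the 'flatten_input' signature key used above:

import tensorflow as tf

# Hypothetical MNIST model; the Flatten layer named "flatten" should expose
# a serving-signature input called "flatten_input" with shape [None, 28, 28, 1].
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1), name="flatten"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.save('models/mnist_test', save_format='tf')  # writes a SavedModel directory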
Answer 2 (score: 0)
The way I currently do it is TF2 -> SavedModel (via keras.experimental.export_saved_model) -> frozen_graph.pb (via the freeze_graph tool, which can take a SavedModel as input). I don't know whether this is the "recommended" way.

Also, I still don't know how to load the frozen model back and run inference "the TF2 way" (i.e., with no graphs, sessions, etc.).

You can also look at keras.models.save_model(model, 'path', save_format='tf'), which seems to produce checkpoint files (you would still need to freeze them, so I personally think the SavedModel route is better).
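For reference, a hedged sketch of the freeze_graph step described above, via the tool's Python entry point; the output node name and both paths are placeholders you would adapt to your own model:

from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph=None,
    input_saver=None,
    input_binary=False,
    input_checkpoint=None,
    output_node_names="StatefulPartitionedCall",  # placeholder: inspect your graph's nodes
    restore_op_name=None,
    filename_tensor_name=None,
    output_graph="frozen_graph.pb",
    clear_devices=True,
    initializer_nodes="",
    input_saved_model_dir="path_to_saved_model",
)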
Answer 3 (score: 0)
I use TF2 to convert models by: passing keras.callbacks.ModelCheckpoint(save_weights_only=True) to model.fit to save a checkpoint; loading the checkpoint with self.model.load_weights(self.checkpoint_path); converting it to h5 with self.model.save(h5_path, overwrite=True, include_optimizer=False); and finally converting the h5 to pb:

import logging
import tensorflow as tf
from tensorflow.compat.v1 import graph_util
from tensorflow.python.keras import backend as K
from tensorflow import keras

# necessary !!!
tf.compat.v1.disable_eager_execution()

h5_path = '/path/to/model.h5'
model = keras.models.load_model(h5_path)
model.summary()

# save pb
with K.get_session() as sess:
    output_names = [out.op.name for out in model.outputs]
    input_graph_def = sess.graph.as_graph_def()
    # clear device assignments so the graph is portable
    for node in input_graph_def.node:
        node.device = ""
    graph = graph_util.remove_training_nodes(input_graph_def)
    graph_frozen = graph_util.convert_variables_to_constants(sess, graph, output_names)
    tf.io.write_graph(graph_frozen, '/path/to/pb', 'model.pb', as_text=False)
logging.info("save pb successfully!")
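For completeness, a minimal sketch of the checkpoint and h5 steps described above; the model, dummy data, and paths are placeholders:

import numpy as np
import tensorflow as tf
from tensorflow import keras

# Hypothetical model and dummy data; substitute your own.
model = keras.Sequential([keras.layers.Dense(10, input_shape=(784,))])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
x_train = np.random.rand(32, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(32,))

# 1) Save weights-only checkpoints during training.
checkpoint_path = '/path/to/checkpoints/ckpt'
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True)
model.fit(x_train, y_train, epochs=1, callbacks=[cp_callback])

# 2) Reload the checkpoint and convert to h5.
model.load_weights(checkpoint_path)
model.save('/path/to/model.h5', overwrite=True, include_optimizer=False)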