I'm training a Sequential model (NASNet) with custom outputs in Google Colab. I export the model to h5 with the model.save() method. I use a separate ipynb to load the h5 and convert it to pb; however, the resulting pb is missing the named output node, so I can't get predictions from the converted model.
My Sequential model definition:
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_labels, activation='sigmoid', name="output")
])
model.summary()
I can compile the model, train, and run predictions in Colab.
I export the model like this:
model.save(model_path)
To convert to a TensorFlow pb, I use a new ipynb, and the conversion runs "successfully" (i.e., it does not fail).
The conversion code appears to be in wide use elsewhere. Here it is:
import tensorflow as tf
import tensorflow.keras.backend as K
K.set_learning_phase(0)
restored_model = tf.keras.models.load_model(model_path)
print(restored_model.outputs)
print(restored_model.inputs)
def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    """
    Freezes the state of a session into a pruned computation graph.

    Creates a new computation graph where variable nodes are replaced by
    constants taking their current value in the session. The new graph will be
    pruned so subgraphs that are not necessary to compute the requested
    outputs are removed.
    @param session The TensorFlow session to be frozen.
    @param keep_var_names A list of variable names that should not be frozen,
                          or None to freeze all the variables in the graph.
    @param output_names Names of the relevant graph outputs.
    @param clear_devices Remove the device directives from the graph for better portability.
    @return The frozen graph definition.
    """
    from tensorflow.python.framework.graph_util import convert_variables_to_constants
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        # Graph -> GraphDef ProtoBuf
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = convert_variables_to_constants(session, input_graph_def,
                                                      output_names, freeze_var_names)
        return frozen_graph

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in restored_model.outputs],
                              clear_devices=True)
tf.train.write_graph(frozen_graph, "/tmp", model_name + ".pb", as_text=False)
output_model_name = model_name + ".pb"
output_model_path = "/tmp/" + output_model_name
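To check whether the expected output node actually made it into the frozen graph, the GraphDef can be inspected directly rather than relying on Netron. A minimal sketch; `list_graph_nodes` is a hypothetical helper, and `frozen_graph` refers to the GraphDef returned by `freeze_session` above:

```python
def list_graph_nodes(graph_def):
    """Return the names of all nodes in a GraphDef-like object."""
    return [node.name for node in graph_def.node]

# Usage (with the frozen_graph produced above):
#   names = list_graph_nodes(frozen_graph)
#   print("output/Sigmoid" in names)
```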
At first glance the resulting pb looks well formed, but Netron and other tools show that it has no final output node in the graph.
I was able to verify in the first ipynb that my tf.keras h5 model has inputs and outputs via:
print(model.input.op.name)
print(model.output.op.name)
which returns:
NASNet_input
output/Identity
When I load the saved h5 in preparation for converting to a TensorFlow proto, I also check the input and output names:
print(restored_model.outputs)
print(restored_model.inputs)
which return, in TF naming convention:
[<tf.Tensor 'output/Sigmoid:0' shape=(?, 229) dtype=float32>]
[<tf.Tensor 'NASNet_input:0' shape=(?, 224, 224, 3) dtype=float32>]
So it is clear my h5 has an output node named "output", as I confirmed in Netron when inspecting the h5.
Why does my conversion code seem to strip the output node from the pb graph?
Thanks!
Answer 0 (score: 0)
So it appears tf 2.0 fixed this bug (see https://github.com/tensorflow/tensorflow/issues/26809), which doesn't help, since I need 1.14 to export to a valid protobuf (I can't use the new SavedModel format in 2.0, because the coreML converter can't read it, and the onnx converter dies on this saved model for other reasons), so the whole attempt seems to be a dead end.
Can one seriously not export a TF 2.0 Keras model to a protobuf compatible with the 1.14 runtime?