I have 2 ProtoBuf files, and I currently load and forward each one separately by calling

out1 = session.run(graph1out, feed_dict={graph1inp: inp1})

followed by

final = session.run(graph2out, feed_dict={graph2inp: out1})

where graph1inp and graph1out are the input and output nodes of graph 1, and similarly for graph 2. Now I would like to connect graph1out to graph2inp, so that I only have to run graph2out while feeding graph1inp with inp1. In other words: wire the input and output tensors of the two related graphs together, so that a single run suffices to do inference over both trained ProtoBuf files.
Answer 0 (score: 8)
Assuming your ProtoBuf files contain serialized tf.GraphDef protos, you can connect the two graphs using the input_map argument of tf.import_graph_def():
# Import graph1.
graph1_def = ... # tf.GraphDef object
out1_name = "..." # name of the graph1out tensor in graph1_def.
graph1out, = tf.import_graph_def(graph1_def, return_elements=[out1_name])
# Import graph2 and connect it to graph1.
graph2_def = ... # tf.GraphDef object
inp2_name = "..." # name of the graph2inp tensor in graph2_def.
out2_name = "..." # name of the graph2out tensor in graph2_def.
graph2out, = tf.import_graph_def(graph2_def,
                                 input_map={inp2_name: graph1out},
                                 return_elements=[out2_name])
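As a concrete, runnable sketch of this approach: the toy graphs, tensor names, and the tf.compat.v1 shim below are my own assumptions (the real graphs would come from your ProtoBuf files), but the wiring follows the answer.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style API; assumed shim for TF2

tf.disable_eager_execution()

# Two toy GraphDefs standing in for the two ProtoBuf files.
with tf.Graph().as_default() as g1:
    inp = tf.placeholder(tf.float32, (None, 3), name='graph1inp')
    tf.multiply(inp, 2.0, name='graph1out')
graph1_def = g1.as_graph_def()

with tf.Graph().as_default() as g2:
    inp = tf.placeholder(tf.float32, (None, 3), name='graph2inp')
    tf.add(inp, 1.0, name='graph2out')
graph2_def = g2.as_graph_def()

# Stitch both graphs into one fresh graph: graph2's input is remapped
# onto graph1's output via input_map, so one run covers both.
with tf.Graph().as_default():
    graph1inp, graph1out = tf.import_graph_def(
        graph1_def, return_elements=['graph1inp:0', 'graph1out:0'])
    graph2out, = tf.import_graph_def(
        graph2_def, input_map={'graph2inp:0': graph1out},
        return_elements=['graph2out:0'])
    with tf.Session() as sess:
        final = sess.run(graph2out, feed_dict={graph1inp: np.ones((1, 3))})
        # Each element goes through x*2 then +1, so ones come out as 3s.
```

A single sess.run of graph2out now pulls data through both imported graphs.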
Answer 1 (score: 5)
The accepted answer does connect the two graphs, but it does not restore collections, or global and trainable variables. After an exhaustive search, I found a better solution:
import tensorflow as tf
from tensorflow.python.framework import meta_graph

with tf.Graph().as_default() as graph1:
    input = tf.placeholder(tf.float32, (None, 20), name='input')
    ...
    output = tf.identity(input, name='output')

with tf.Graph().as_default() as graph2:
    input = tf.placeholder(tf.float32, (None, 20), name='input')
    ...
    output = tf.identity(input, name='output')

graph = tf.get_default_graph()
x = tf.placeholder(tf.float32, (None, 20), name='input')
We export each graph with tf.train.export_meta_graph, which also exports the CollectionDefs, and import it with meta_graph.import_scoped_meta_graph. The connection happens at import time, specifically via the input_map argument.
Now wire the graphs together; this also remaps the global variables.
meta_graph1 = tf.train.export_meta_graph(graph=graph1)
meta_graph.import_scoped_meta_graph(meta_graph1, input_map={'input': x},
                                    import_scope='graph1')
out1 = graph.get_tensor_by_name('graph1/output:0')

meta_graph2 = tf.train.export_meta_graph(graph=graph2)
meta_graph.import_scoped_meta_graph(meta_graph2, input_map={'input': out1},
                                    import_scope='graph2')
You can also import the meta graph directly from a file.
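Putting the whole recipe together end to end: the toy op bodies, the shapes, and the tf.compat.v1 shim below are my assumptions standing in for the answer's elided `...` sections, but the export/import wiring itself follows the answer above.

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style API; assumed shim for TF2
from tensorflow.python.framework import meta_graph

tf.disable_eager_execution()

# Two toy graphs; real code would build or load the trained graphs here.
with tf.Graph().as_default() as graph1:
    inp = tf.placeholder(tf.float32, (None, 20), name='input')
    tf.identity(inp * 2.0, name='output')

with tf.Graph().as_default() as graph2:
    inp = tf.placeholder(tf.float32, (None, 20), name='input')
    tf.identity(inp + 1.0, name='output')

with tf.Graph().as_default() as graph:
    x = tf.placeholder(tf.float32, (None, 20), name='input')

    # Export each graph (collections included) and import it into `graph`,
    # remapping its 'input' placeholder via input_map.
    meta_graph.import_scoped_meta_graph(
        tf.train.export_meta_graph(graph=graph1),
        input_map={'input': x}, import_scope='graph1')
    out1 = graph.get_tensor_by_name('graph1/output:0')

    meta_graph.import_scoped_meta_graph(
        tf.train.export_meta_graph(graph=graph2),
        input_map={'input': out1}, import_scope='graph2')
    out2 = graph.get_tensor_by_name('graph2/output:0')

    with tf.Session() as sess:
        result = sess.run(out2, feed_dict={x: np.ones((1, 20))})
        # ones -> *2 -> +1, so every element comes out as 3.
```

One session run now drives both subgraphs, and any variables or collections in the originals are carried across by the meta-graph import.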