Frozen graph performs worse than the retrained graph

Asked: 2018-09-25 00:46:01

Tags: python tensorflow

I followed the basic TensorFlow for Poets codelab (Google Codelabs), but then, to try to freeze the graph, I added the following to the end of the main function:

# Assumed imports earlier in the script:
# import tensorflow as tf
# from tensorflow.python.tools import freeze_graph

# Initialize all variables, then save a checkpoint and the graph definition
init = tf.global_variables_initializer()
sess.run(init)
saver = tf.train.Saver()
saver.save(sess, './pods.ckpt')
tf.train.write_graph(sess.graph.as_graph_def(), '.', 'pods.pbtxt', as_text=True)

# Print the input/output node names of the previously retrained graph
gf = tf.GraphDef()
gf.ParseFromString(open('./tf_files/retrained_graph.pb', 'rb').read())
for n in gf.node:
    if n.op in ('Softmax', 'Placeholder'):
        print(n.name)

# Freeze the graph: merge the checkpoint variables into pods.pbtxt
freeze_graph.freeze_graph('pods.pbtxt', "", False,
                          './pods.ckpt', "final_result",
                          "save/restore_all", "save/Const:0",
                          'frozenpods.pb', True, "")
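For reference, the same freezing step can be done directly with `graph_util.convert_variables_to_constants`, the mechanism `freeze_graph` uses internally, without the checkpoint/pbtxt round-trip. Below is a minimal, self-contained sketch using the TF 1.x-style API via `tf.compat.v1`; the tiny two-variable graph and its node names are made up for illustration, not taken from the codelab:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.disable_resource_variables()  # use classic VariableV2 nodes

graph = tf.Graph()
with graph.as_default():
    # Hypothetical stand-in for the retrained final layer: two variables,
    # which is why "Froze 2 variables" appears in the log.
    x = tf.compat.v1.placeholder(tf.float32, [None, 4], name="input")
    w = tf.Variable(tf.ones([4, 2]), name="weights")
    b = tf.Variable(tf.zeros([2]), name="biases")
    y = tf.nn.softmax(tf.matmul(x, w) + b, name="final_result")

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    # Replace every variable reachable from "final_result" with a Const op
    frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, graph.as_graph_def(), ["final_result"])

# The frozen GraphDef contains no Variable nodes, only Const and plain ops
assert not any(n.op.startswith("Variable") for n in frozen.node)
```

Note that only the variables reachable from the named output node are frozen; in the codelab's retrained graph the Inception weights are already constants, so the final-layer weights and biases are the only two variables left to convert.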

After retraining with these additions at the end of the main function, the tail of the output looks like this:

INFO:tensorflow:2018-09-24 17:39:07.615455: Step 117: Validation accuracy = 100.0% (N=100)
INFO:tensorflow:Final test accuracy = 100.0% (N=28)
INFO:tensorflow:Froze 2 variables.
Converted 2 variables to const ops.
input
final_result
INFO:tensorflow:Restoring parameters from ./pods.ckpt
INFO:tensorflow:Froze 2 variables.
Converted 2 variables to const ops.
560 ops in the final graph.

I'd like to know why only 2 variables were frozen into const ops. Also, why is the accuracy when scoring with the frozen model so much lower than with the originally generated retrained_graph.pb file?
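One way to narrow down an accuracy drop like this is to feed the same input through the graph before and after freezing and compare outputs; if freezing is correct, the results should be numerically identical. A self-contained sketch of that check (TF 1.x-style API via `tf.compat.v1`; the tiny graph and node names are hypothetical, not the codelab's):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.disable_resource_variables()  # use classic VariableV2 nodes

graph = tf.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [None, 3], name="input")
    w = tf.Variable([[1.0], [2.0], [3.0]], name="w")
    y = tf.identity(tf.matmul(x, w), name="final_result")

sample = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)

with tf.compat.v1.Session(graph=graph) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    before = sess.run(y, {x: sample})          # output of the live graph
    frozen_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, graph.as_graph_def(), ["final_result"])

# Reload the frozen GraphDef into a fresh graph and rerun the same input
with tf.Graph().as_default() as fg:
    tf.import_graph_def(frozen_def, name="")
    with tf.compat.v1.Session(graph=fg) as sess:
        after = sess.run("final_result:0", {"input:0": sample})

# Freezing only replaces variables with constants; outputs should match
assert np.allclose(before, after)
```

If the frozen model's outputs diverge on identical inputs, the usual suspects are a mismatched graph definition (e.g. freezing `pods.pbtxt` against a checkpoint from a different run) or feeding the wrong input/output tensor names at inference time.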

Thanks for any help!

0 Answers:

There are no answers yet.