I want to implement the Cyclical Learning Rate method (to find good learning-rate boundaries), which requires plotting learning rate against accuracy. At the moment I can't get this to work: while training the model with the code below, it either draws an empty plot or gives me empty lists, and I'm not sure whether I've misunderstood something about how TF works.
To elaborate on the code (credits): just above this code I create acc_list = [] and lr_list = []. These two lists should be filled with values on every global step the model executes. So I want to append those values to the lists and, once the model has finished, plot both lists against each other to find the learning-rate boundaries.
Do I need to do more 'tf-coding'? Right now I assume that running sess is enough, since that also yields the current learning rate and the current accuracy, and should therefore produce the values.
def run():
    #Run the managed session
    with sv.managed_session() as sess:
        for step in range(num_steps_per_epoch * num_epochs):
            #At the start of every epoch, show the vital information:
            if step % num_batches_per_epoch == 0:
                logging.info('Epoch %s/%s', step/num_batches_per_epoch + 1, num_epochs)
                learning_rate_value, accuracy_value = sess.run([lr1, accuracy])
                logging.info('Current Learning Rate: %s', learning_rate_value)
                logging.info('Current Streaming Accuracy: %s', accuracy_value)

            #Log the summaries every 10 steps.
            if step % 10 == 0:
                loss, _ = train_step(sess, train_op, sv.global_step)
                summaries = sess.run(my_summary_op)
                sv.summary_computed(sess, summaries)
                iteration_step += 1
            #Run training if not 10 steps
            else:
                loss, _ = train_step(sess, train_op, sv.global_step)
                iteration_step += 1

            lr_list.append(sess.run([lr1]))
            acc_list.append(sess.run([accuracy]))

        #We log the final training loss and accuracy
        logging.info('Final Loss: %s', loss)
        logging.info('Final Accuracy: %s', sess.run(accuracy))
        plt.plot(lr_list, acc_list)

        #Once all the training has been done, save the log files and checkpoint model
        logging.info('Finished training! Saving model to disk now.')
        sv.saver.save(sess, sv.save_path, global_step = sv.global_step)

if __name__ == '__main__':
    run()
Answer 0 (score: 0)
There is nothing TF-specific here. lr_list.append(sess.run([lr1])) does append the current value of the lr1 tensor to lr_list. At that point it's pure Python. If the list ends up empty, debug it the way you would any regular Python code: for example, make sure that line is actually reached as many times as you expect, and make sure nothing else mutates the list between the point where you append values and the point where you plot them.
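One subtle point worth checking, as an assumption on my part rather than something confirmed by the question: sess.run([lr1]) (with a list of fetches) returns a one-element list, so lr_list ends up as a list of lists rather than a list of scalars, which can make the resulting plot look wrong. Fetching the tensor directly with sess.run(lr1) stores plain scalars. The sketch below uses a hypothetical fake_run stand-in that mimics this part of Session.run's behaviour, so the shape difference can be seen without TensorFlow (and remember that plt.plot alone draws nothing on screen; plt.show() or a savefig call is still needed):

```python
# Hypothetical stand-in for sess.run: given a list of fetches it returns a
# list of values; given a single fetch it returns the bare value. This
# mirrors the list-in/list-out convention of TensorFlow's Session.run.
def fake_run(fetches):
    values = {"lr1": 0.01, "accuracy": 0.9}
    if isinstance(fetches, list):
        return [values[f] for f in fetches]
    return values[fetches]

# Appending the result of a list-fetch stores one-element lists...
lr_list = []
lr_list.append(fake_run(["lr1"]))
assert lr_list == [[0.01]]

# ...whereas fetching directly stores plain scalars, which is the
# shape plt.plot expects for its x and y sequences.
lr_scalars = []
lr_scalars.append(fake_run("lr1"))
assert lr_scalars == [0.01]
```

A cheap way to rule out the "list never filled" case is to log len(lr_list) right before the plt.plot call; if it prints 0, the append line is simply never reached.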