I am following the code at https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py to build a multilayer perceptron for the MNIST problem.
In the following code,
with tf.Session() as sess:
    sess.run(init)
    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples/batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([train_op, loss_op], feed_dict={X: batch_x,
                                                            Y: batch_y})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch+1), "cost={:.9f}".format(avg_cost))
    print("Optimization Finished!")
I want to record the accuracy at every iteration, still using sess.run; how can I do that?
Answer (score: 2)
Do you have code that measures the training accuracy? You need to run that block as well. Paste it at the bottom of the "Loop over all batches" block so that it runs at every iteration.
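For reference, the accuracy op commonly used alongside this script is built as accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(pred, 1), tf.argmax(Y, 1)), tf.float32)) (here assuming the network's output tensor is named pred), and it can be fetched in the same call as the train op: _, c, acc = sess.run([train_op, loss_op, accuracy], feed_dict={X: batch_x, Y: batch_y}). What that op computes for one batch can be sketched in plain NumPy (the data below is made up for illustration):

```python
import numpy as np

def batch_accuracy(logits, one_hot_labels):
    """NumPy equivalent of tf.reduce_mean(tf.cast(tf.equal(
    tf.argmax(logits, 1), tf.argmax(labels, 1)), tf.float32)):
    the fraction of rows whose predicted class matches the label."""
    predicted = np.argmax(logits, axis=1)        # highest-scoring class per row
    actual = np.argmax(one_hot_labels, axis=1)   # true class per row
    return float(np.mean(predicted == actual))

# Hypothetical batch: 3 examples, 2 classes; rows 1 and 2 are correct.
logits = np.array([[2.0, 0.1],
                   [0.3, 1.5],
                   [0.9, 1.1]])
labels = np.array([[1, 0],
                   [0, 1],
                   [1, 0]])
print(batch_accuracy(logits, labels))  # 2 of 3 correct: prints 0.6666666666666666
```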
If you want the loss rather than the accuracy, just print avg_cost at that spot instead. If you want the loss printed once per epoch rather than once per iteration, remove the modulus condition if epoch % display_step == 0: and unindent the print after it.
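Putting those suggestions together, the resulting loop structure looks like this. This is a sketch of the control flow only: run_step is a made-up stand-in for sess.run([train_op, loss_op, accuracy], ...) that returns fake (loss, accuracy) values so the loop runs on its own.

```python
def run_step(i):
    """Stand-in for sess.run([train_op, loss_op, accuracy], feed_dict=...).
    Returns a fabricated (loss, accuracy) pair for illustration."""
    return 1.0 / (i + 1), min(1.0, 0.1 * (i + 1))

training_epochs, total_batch = 2, 3
for epoch in range(training_epochs):
    avg_cost = 0.0
    for i in range(total_batch):
        c, acc = run_step(i)
        avg_cost += c / total_batch
        # Per-iteration logging sits at the bottom of the batch loop:
        print("epoch %d iter %d: loss=%.4f acc=%.4f" % (epoch + 1, i + 1, c, acc))
    # Per-epoch loss: no modulus condition, print at the epoch indent level.
    print("Epoch: %04d cost=%.9f" % (epoch + 1, avg_cost))
```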
Does one of these meet your needs?