Is there a way to accumulate tf.summary data over an evaluation/test set (in a non-hacky way)?
By non-hacky I mean something smarter than my current solution:
# init writer
writer = tf.summary.FileWriter(path, graph)
# build model
...
# add stuff
for v in tf.trainable_variables():
    tf.summary.histogram(v.name, v)
tf.summary.scalar("loss", loss)
...
# merge summary
merged = tf.summary.merge_all()
# during training --- everything fine, since we operate per mini-batch only
summary, _ = session.run([merged, optimizer_op],
                         feed_dict={X: train_batch, Y: train_batch_labels})
writer.add_summary(summary, train_step)
# test eval
# here it does get ugly, because we need to buffer every mini-batch
# in the whole test set in order to get accu, loss, ... for the whole
# test set and not only per mini-batch
for batch, batch_labels in test_data:
    loss, accuracy = session.run([lossop, accuop],
                                 feed_dict={X: batch, Y: batch_labels})
    buffer_loss.append(loss)
    buffer_accuracy.append(accuracy)
# this includes a new filewriter for test evaluation
# plus a new operation that calcs the mean over both buffers
# plus a new summary for the calculated means
# plus writing that data
While this works, it is a poor solution, because I have to recreate the summary data just for the test-set evaluation, and iterate over every mini-batch of the whole test set with external buffers in Python itself, only to hand the finished buffers back to TensorFlow to finally get the means over all test batches.
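For what it's worth, the Python-side buffers can at least be reduced to running sums, and the final means can be written to TensorBoard without any extra graph ops by constructing the tf.Summary protobuf by hand. A minimal sketch of the accumulation part; the MeanAccumulator class and its names are mine, not part of the TensorFlow API:

```python
class MeanAccumulator:
    """Accumulates a batch-size-weighted running mean, so the whole
    test set never has to be buffered in Python lists."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def add(self, batch_mean, batch_size):
        # undo the per-batch averaging, re-average once at the end
        self.total += batch_mean * batch_size
        self.count += batch_size

    @property
    def mean(self):
        return self.total / self.count


# usage sketch: feed in per-batch means (as returned by session.run)
loss_acc = MeanAccumulator()
for batch_mean, batch_size in [(0.5, 32), (0.25, 32), (1.0, 16)]:
    loss_acc.add(batch_mean, batch_size)
print(loss_acc.mean)  # -> 0.5, the mean over all 80 examples
```

The resulting value can then be written directly, e.g. `test_writer.add_summary(tf.Summary(value=[tf.Summary.Value(tag="test/loss", simple_value=loss_acc.mean)]), step)` — tf.Summary is the same protobuf that merged summaries produce, so TensorBoard renders it like any other scalar. Note that TF 1.x also ships streaming metrics (`tf.metrics.mean`, `tf.metrics.accuracy`) that perform this accumulation inside the graph via a value op plus an update op, which may be the least hacky option of all.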