I am trying to visualize the summary data generated from a TensorFlow session. I have already confirmed via TensorBoard's inspect feature that the summaries are indeed stored:
tensorboard --logdir=C:\ML\tensorflow_logs --port 6006 --inspect
======================================================================
Processing event files... (this can take a few minutes)
======================================================================
Found event files in:
C:\ML\tensorflow_logs
These tags are in C:\ML\tensorflow_logs:
audio -
histograms -
images -
scalars
   LossValue
   accuracy_1
tensor -
======================================================================
Event statistics for C:\ML\tensorflow_logs:
audio -
graph
   first_step           0
   last_step            0
   max_step             0
   min_step             0
   num_steps            1
   outoforder_steps     []
histograms -
images -
scalars
   first_step           0
   last_step            100
   max_step             100
   min_step             0
   num_steps            101
   outoforder_steps     []
sessionlog:checkpoint -
sessionlog:start -
sessionlog:stop -
tensor -
======================================================================
However, if I start TensorBoard (without the --inspect flag) and open the site in the browser (Chrome in this case), I only see the Graph and no Scalars; the Scalars tab just shows a placeholder message.
I am using Anaconda on Windows, with the latest versions of TensorFlow and TensorBoard (0.1.8).
The summary-related part of my code looks like this:
with graph.as_default():
    # ... model definition (omitted) ...
    # ... loss / accuracy ops (omitted) ...
    tf.summary.scalar("LossValue", loss)
    tf.summary.scalar("Accuracy", accuracy_measure)

with tf.Session(graph=graph) as session:
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter('C:/ML/tensorflow_logs', session.graph)
    tf.global_variables_initializer().run()
    for step in range(train_steps):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_data = train_dataset[offset:(offset + batch_size), :, :, :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, keep_prob: dropout_keep_prob}
        _, l, predictions, summary = session.run([optimizer, loss, train_prediction, merged], feed_dict=feed_dict)
        writer.add_summary(summary, step)
    writer.close()
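As an additional sanity check independent of the TensorBoard UI, the event file can be read back directly in Python. The following is a minimal sketch (assuming TensorFlow 1.x; the events.out.tfevents.* glob is only a guess at the file name FileWriter generates in the log directory) that prints every scalar summary it finds:

import glob
import tensorflow as tf

# Locate the event file(s) written by FileWriter (file-name pattern assumed).
for event_file in glob.glob('C:/ML/tensorflow_logs/events.out.tfevents.*'):
    # summary_iterator yields the serialized Event protos stored in the file.
    for event in tf.train.summary_iterator(event_file):
        for value in event.summary.value:
            # simple_value is populated for scalar summaries such as LossValue.
            if value.HasField('simple_value'):
                print(event.step, value.tag, value.simple_value)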
Answer 0 (score: 0):
Try running pip install tensorflow-tensorboard.
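Around TensorFlow 1.3, the TensorBoard frontend was split out into the separate tensorflow-tensorboard pip package, so installing it and restarting tensorboard --logdir=C:\ML\tensorflow_logs should make the Scalars tab appear. A minimal sketch to confirm which frontend version Python picks up afterwards (assuming the tensorboard module exposes a version submodule, as recent releases do):

# Print the version of the installed TensorBoard frontend
# (assumption: tensorboard.version exists in the installed release).
from tensorboard import version
print(version.VERSION)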