The Image Dashboard section of the TensorBoard README says:
Since the image dashboard supports arbitrary pngs, you can use this to embed custom visualizations (e.g. matplotlib scatterplots) into TensorBoard.
I see how a pyplot image could be written to file, read back in as a tensor, and then used with tf.image_summary() to write it to TensorBoard, but this statement from the README suggests there is a more direct way. Is there? If so, is there any further documentation and/or examples of how to do this efficiently?
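For reference, a minimal sketch of the roundabout route described above (save the figure to a PNG file, read it back, and log it), assuming the TF 1.x summary API (the successor of tf.image_summary()) and a hypothetical plot.png file and ./logs directory:

import matplotlib.pyplot as plt
import tensorflow as tf

# Save a pyplot figure to disk as PNG
plt.figure()
plt.plot([1, 2])
plt.savefig('plot.png', format='png')

# Read the file back and decode it into an image tensor
png_data = tf.read_file('plot.png')
image = tf.expand_dims(tf.image.decode_png(png_data, channels=4), 0)
summary_op = tf.summary.image('plot_from_file', image)

with tf.Session() as sess:
    writer = tf.summary.FileWriter('./logs')
    writer.add_summary(sess.run(summary_op))
    writer.close()

The answers below show more direct routes that skip the round trip through the filesystem.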
Answer 0 (score: 40)
It is quite easy to do if you have the image in a memory buffer. Below, I show an example where a pyplot figure is saved to a buffer, converted to a TF image representation, and then sent to an image summary.
import io
import matplotlib.pyplot as plt
import tensorflow as tf


def gen_plot():
    """Create a pyplot plot and save to buffer."""
    plt.figure()
    plt.plot([1, 2])
    plt.title("test")
    buf = io.BytesIO()
    plt.savefig(buf, format='png')
    buf.seek(0)
    return buf


# Prepare the plot
plot_buf = gen_plot()

# Convert PNG buffer to TF image
image = tf.image.decode_png(plot_buf.getvalue(), channels=4)

# Add the batch dimension
image = tf.expand_dims(image, 0)

# Add image summary
summary_op = tf.summary.image("plot", image)

# Session
with tf.Session() as sess:
    # Run
    summary = sess.run(summary_op)
    # Write summary
    writer = tf.summary.FileWriter('./logs')
    writer.add_summary(summary)
    writer.close()
This gives the following TensorBoard visualization:
Answer 1 (score: 8)
Somewhat late with my answer. With tf-matplotlib, a simple scatter plot boils down to:

import tensorflow as tf
import numpy as np

import tfmpl

@tfmpl.figure_tensor
def draw_scatter(scaled, colors):
    '''Draw scatter plots. One for each color.'''
    figs = tfmpl.create_figures(len(colors), figsize=(4, 4))
    for idx, f in enumerate(figs):
        ax = f.add_subplot(111)
        ax.axis('off')
        ax.scatter(scaled[:, 0], scaled[:, 1], c=colors[idx])
        f.tight_layout()

    return figs

with tf.Session(graph=tf.Graph()) as sess:
    # A point cloud that can be scaled by the user
    points = tf.constant(
        np.random.normal(loc=0.0, scale=1.0, size=(100, 2)).astype(np.float32)
    )
    scale = tf.placeholder(tf.float32)
    scaled = points * scale

    # Note, `scaled` above is a tensor. It is being passed to `draw_scatter` below.
    # However, when `draw_scatter` is invoked, the tensor will be evaluated and a
    # numpy array representing its content is provided.
    image_tensor = draw_scatter(scaled, ['r', 'g'])
    image_summary = tf.summary.image('scatter', image_tensor)
    all_summaries = tf.summary.merge_all()

    writer = tf.summary.FileWriter('log', sess.graph)
    summary = sess.run(all_summaries, feed_dict={scale: 2.})
    writer.add_summary(summary, global_step=0)

When executed, this produces the following plot inside TensorBoard.

Note that tf-matplotlib takes care of evaluating any tensor inputs, avoids pyplot threading issues, and supports blitting for runtime-critical plotting.
Answer 2 (score: 7)
The following script does not use an intermediate RGB/PNG encoding. It also fixes the issue of additional operations being constructed during execution: a single summary is reused.
The size of the figure is expected to remain constant during execution.
Working solution:
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np


def get_figure():
    fig = plt.figure(num=0, figsize=(6, 4), dpi=300)
    fig.clf()
    return fig


def fig2rgb_array(fig, expand=True):
    fig.canvas.draw()
    buf = fig.canvas.tostring_rgb()
    ncols, nrows = fig.canvas.get_width_height()
    shape = (nrows, ncols, 3) if not expand else (1, nrows, ncols, 3)
    return np.frombuffer(buf, dtype=np.uint8).reshape(shape)


def figure_to_summary(fig):
    image = fig2rgb_array(fig)
    summary_writer.add_summary(
        vis_summary.eval(feed_dict={vis_placeholder: image}))


if __name__ == '__main__':
    # construct graph
    x = tf.Variable(initial_value=tf.random_uniform((2, 10)))
    inc = x.assign(x + 1)

    # construct summary
    fig = get_figure()
    vis_placeholder = tf.placeholder(tf.uint8, fig2rgb_array(fig).shape)
    vis_summary = tf.summary.image('custom', vis_placeholder)

    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        summary_writer = tf.summary.FileWriter('./tmp', sess.graph)

        for i in range(100):
            # execute step
            _, values = sess.run([inc, x])
            # draw on the plot
            fig = get_figure()
            plt.subplot(111).scatter(values[0], values[1])
            # save the summary
            figure_to_summary(fig)
Answer 3 (score: 1)
This is intended to complement Andrzej Pronobis' answer. Following his nice post closely, I set up this minimal working example:
plt.figure()
plt.plot([1, 2])
plt.title("test")
buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
image = tf.image.decode_png(buf.getvalue(), channels=4)
image = tf.expand_dims(image, 0)
summary = tf.summary.image("test", image, max_outputs=1)
writer.add_summary(summary, step)
where writer is an instance of tf.summary.FileWriter. This gave me the following error: AttributeError: 'Tensor' object has no attribute 'value', for which this github post had the solution: the summary has to be evaluated (converted into a string) before it is added to the writer. So my working code remained as follows (simply add the .eval() call in the last line):
plt.figure()
plt.plot([1, 2])
plt.title("test")
buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
image = tf.image.decode_png(buf.getvalue(), channels=4)
image = tf.expand_dims(image, 0)
summary = tf.summary.image("test", image, max_outputs=1)
writer.add_summary(summary.eval(), step)
This might have been short enough to be a comment on his answer, but these details are easily overlooked (and I may be doing something else differently too), so here it is, hoping it helps!
Cheers,
Andres
Answer 4 (score: 1)
Matplotlib plots can be added to TensorBoard directly with the add_figure function:
import numpy as np, matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

# Example plot
x = np.linspace(0, 10)
plt.plot(x, np.sin(x))

# Adding plot to tensorboard
with SummaryWriter('runs/SO_test') as writer:
    writer.add_figure('Fig1', plt.gcf())

# Loading tensorboard
%tensorboard --logdir=runs
Answer 5 (score: 0)
Finally, there is some official documentation about "Logging arbitrary image data", with an example of an image created by matplotlib.
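As a reference point, a minimal sketch in the spirit of that tutorial, assuming TensorFlow 2.x eager execution and a hypothetical log directory logs/plots:

import io
import matplotlib.pyplot as plt
import tensorflow as tf

# Render a pyplot figure into a PNG held in memory
fig = plt.figure()
plt.plot([1, 2])
buf = io.BytesIO()
plt.savefig(buf, format='png')
plt.close(fig)
buf.seek(0)

# Decode the PNG and add a batch dimension
image = tf.image.decode_png(buf.getvalue(), channels=4)
image = tf.expand_dims(image, 0)

# Write the image summary with the TF 2.x API
writer = tf.summary.create_file_writer('logs/plots')
with writer.as_default():
    tf.summary.image('matplotlib_plot', image, step=0)

Pointing TensorBoard at logs/plots then shows the figure on the Images tab.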
Answer 6 (score: 0)
Adding an option that works in PyTorch. We take a Matplotlib figure, draw it onto a canvas, and then convert it to numpy:
import numpy as np
import matplotlib
import matplotlib.backends.backend_agg
import matplotlib.pyplot as plt

# make the canvas
figure = plt.figure(figsize=(10, 10))
canvas = matplotlib.backends.backend_agg.FigureCanvas(figure)

# insert plotting code here; you can use imshow or subplot, etc.
for i in range(25):
    plt.subplot(5, 5, i + 1, title=class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)

# render the canvas and convert it to a channels-first numpy array
canvas.draw()
image = np.frombuffer(canvas.tostring_rgb(), dtype='uint8').reshape((1000, 1000, 3)).transpose((2, 0, 1))
The result can be added directly to TensorBoard:
tensorboard.add_image('name', image, global_step)
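For context, tensorboard here is presumably a torch.utils.tensorboard.SummaryWriter; a minimal sketch of the surrounding setup, where the run directory name is an assumption:

from torch.utils.tensorboard import SummaryWriter

tensorboard = SummaryWriter('runs/example')  # hypothetical run directory
# `image` is the (3, 1000, 1000) channels-first array produced above;
# add_image expects CHW data by default
tensorboard.add_image('name', image, global_step=0)
tensorboard.close()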
Answer 7 (score: -1)
Solution in PyTorch Lightning
This is not the complete class, but rather what you have to add to make it work within the framework.
import io
from typing import Optional

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pytorch_lightning as pl
import seaborn as sn
import torchvision
from PIL import Image
from pytorch_lightning.loggers import TensorBoardLogger


def __init__(self, config, trained_vae, latent_dim):
    self.val_confusion = pl.metrics.classification.ConfusionMatrix(num_classes=self._config.n_clusters)
    self.logger: Optional[TensorBoardLogger] = None

def forward(self, x):
    ...
    return log_probs

def validation_step(self, batch, batch_index):
    if self._config.dataset == "mnist":
        orig_batch, label_batch = batch
        orig_batch = orig_batch.reshape(-1, 28 * 28)

    log_probs = self.forward(orig_batch)
    loss = self._criterion(log_probs, label_batch)

    self.val_confusion.update(log_probs, label_batch)
    return {"loss": loss, "labels": label_batch}

def validation_step_end(self, outputs):
    return outputs

def validation_epoch_end(self, outs):
    tb = self.logger.experiment

    # confusion matrix
    conf_mat = self.val_confusion.compute().detach().cpu().numpy().astype(int)
    df_cm = pd.DataFrame(
        conf_mat,
        index=np.arange(self._config.n_clusters),
        columns=np.arange(self._config.n_clusters))
    plt.figure()
    sn.set(font_scale=1.2)
    sn.heatmap(df_cm, annot=True, annot_kws={"size": 16}, fmt='d')
    buf = io.BytesIO()
    plt.savefig(buf, format='jpeg')
    buf.seek(0)
    im = Image.open(buf)
    im = torchvision.transforms.ToTensor()(im)
    tb.add_image("val_confusion_matrix", im, global_step=self.current_epoch)
And the call:
logger = TensorBoardLogger(save_dir=tb_logs_folder, name='Classifier')

trainer = Trainer(
    default_root_dir=classifier_checkpoints_path,
    logger=logger,
)
)