There is the Beholder plugin, which allows visualization of all trainable variables (with obvious limitations for massively deep networks).

My problem is that I am running my training with the tf.estimator.Estimator class, and it seems that the Beholder plugin does not play well with the Estimator API.

My code looks like this:
# tf.data input pipeline setup
def dataset_input_fn(train=True):
    filenames = ...  # training files
    if not train:
        filenames = ...  # test files
    dataset = tf.data.TFRecordDataset(filenames, "GZIP")
    # ... and so on until ...
    iterator = batched_dataset.make_one_shot_iterator()
    return iterator.get_next()

def train_input_fn():
    return dataset_input_fn(train=True)

def test_input_fn():
    return dataset_input_fn(train=False)
# model function
def cnn(features, labels, mode, params):
    # build model

    # Provide an estimator spec for `ModeKeys.PREDICT`.
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(
            mode=mode,
            predictions={"sentiment": y_pred_cls})

    eval_metric_ops = {
        "accuracy": accuracy_op,
        "precision": precision_op,
        "recall": recall_op
    }

    normal_summary_hook = tf.train.SummarySaverHook(
        100,
        summary_op=summary_op)

    return tf.estimator.EstimatorSpec(
        mode=mode,
        loss=cost_op,
        train_op=train_op,
        eval_metric_ops=eval_metric_ops,
        training_hooks=[normal_summary_hook])

classifier = tf.estimator.Estimator(model_fn=cnn,
                                    params=...,
                                    model_dir=...)

classifier.train(input_fn=train_input_fn, steps=1000)

ev = classifier.evaluate(input_fn=test_input_fn, steps=1000)
tf.logging.info("Loss: {}".format(ev["loss"]))
tf.logging.info("Precision: {}".format(ev["precision"]))
tf.logging.info("Recall: {}".format(ev["recall"]))
tf.logging.info("Accuracy: {}".format(ev["accuracy"]))
I cannot figure out where to add the Beholder hook in this setup. If I add it as a training hook in the cnn function:
return tf.estimator.EstimatorSpec(
    mode=mode,
    loss=dnn.cost,
    train_op=dnn.train_op,
    eval_metric_ops=eval_metric_ops,
    training_hooks=[normal_summary_hook, beholder_hook])
then I get InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype uint8 and shape [?,?,?].
If I try to use tf.train.MonitoredTrainingSession to set up the classifier, then training proceeds normally, but nothing gets logged to the Beholder plugin. Looking at stdout, I see two sessions being created one after the other, so it seems that when you create a tf.estimator.Estimator classifier, it spins up its own session after terminating any existing sessions.
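(A sketch of the kind of thing I tried; the details here are hypothetical, and beholder.update(session=...) is the documented Beholder API:)

beholder = Beholder(LOGDIR)
with tf.train.MonitoredTrainingSession(checkpoint_dir=LOGDIR) as sess:
    beholder.update(session=sess)  # updates land in *this* session

# ...but classifier.train() creates and manages its own session
# internally, so nothing from that session reaches the Beholder logs.
classifier.train(input_fn=train_input_fn, steps=1000)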
Does anyone have any ideas?
Answer 0 (score: 1)

Edit to my post:

This was a problem with old TensorFlow versions. Fortunately, the issue has been fixed in TensorFlow version 1.9! The code below uses Beholder with tf.estimator.Estimator. It produced the same error that you mention with older versions, but everything works fine in version 1.9!
from capser_7_model_fn import *
from tensorflow.python import debug as tf_debug
from tensorflow.python.training import basic_session_run_hooks
from tensorboard.plugins.beholder import Beholder
from tensorboard.plugins.beholder import BeholderHook
import logging

# create estimator for model (the model is described in capser_7_model_fn)
capser = tf.estimator.Estimator(model_fn=model_fn,
                                params={'model_batch_size': batch_size},
                                model_dir=LOGDIR)

# train model
logging.getLogger().setLevel(logging.INFO)  # show info about training progress in the terminal
beholder = Beholder(LOGDIR)
beholder_hook = BeholderHook(LOGDIR)
capser.train(input_fn=train_input_fn, steps=n_steps, hooks=[beholder_hook])
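Here train_input_fn is just a standard Estimator input function, not shown above. A minimal sketch, assuming TFRecord input and a hypothetical parse_example function:

def train_input_fn():
    # Minimal sketch: `train_filenames`, `parse_example`, and
    # `batch_size` are stand-ins for your own pipeline.
    dataset = tf.data.TFRecordDataset(train_filenames)
    dataset = dataset.map(parse_example).batch(batch_size).repeat()
    return dataset.make_one_shot_iterator().get_next()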
A side note: I needed to specify exactly the same LOGDIR for the summary writer, the tensorboard command-line invocation, and the BeholderHook. Previously, in order to compare different runs of my model, I had been writing the summaries for different runs to LOGDIR/run_1, then LOGDIR/run_2, and so on, i.e.:
capser = tf.estimator.Estimator(model_fn=model_fn,
                                params={'model_batch_size': batch_size},
                                model_dir=LOGDIR+'/run_n')
I used

tensorboard --logdir=LOGDIR

to launch tensorboard, and I used
beholder_hook = BeholderHook(LOGDIR)
to write the Beholder data. In that case, Beholder could not find the data it needed. What I needed to do was specify exactly the same LOGDIR for everything. That is, in the code:
capser = tf.estimator.Estimator(model_fn=model_fn,
                                params={'model_batch_size': batch_size},
                                model_dir=LOGDIR+'/run_n')
beholder_hook = BeholderHook(LOGDIR+'/run_n')
and launch tensorboard in the terminal with:
tensorboard --logdir=LOGDIR+'/run_n'
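In other words, build the run directory once and reuse it everywhere (a sketch, with a hypothetical run name):

import os

run_dir = os.path.join(LOGDIR, 'run_n')  # one path for everything
capser = tf.estimator.Estimator(model_fn=model_fn,
                                params={'model_batch_size': batch_size},
                                model_dir=run_dir)
beholder_hook = BeholderHook(run_dir)
# and in the terminal, point tensorboard at that same run_dir path:
#   tensorboard --logdir=<run_dir>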
Hope that helps.