I have looked at previous answers to this question, but it has not resolved my problem. I am implementing the YOLO algorithm (for object detection) from scratch and am stuck on the training part.
For training I am using the tf.estimator API, with code similar to the CNN MNIST example from the TensorFlow examples. I get the following error:
Traceback (most recent call last):
File "recover_v3.py", line 663, in <module>
model.train(input_fn=train_input_fn, steps=1)
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 376, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1145, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1170, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1133, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "recover_v3.py", line 584, in cnn_model_fn
loss=loss, global_step=tf.train.get_global_step())
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 400, in minimize
grad_loss=grad_loss)
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 494, in compute_gradients
self._assert_valid_dtypes([loss])
File "/home/nyu-mmvc-019/miniconda3/envs/tf_0/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 872, in _assert_valid_dtypes
dtype = t.dtype.base_dtype
AttributeError: 'NoneType' object has no attribute 'dtype'
The code in the main file related to the loss function looks like this (similar to the official CNN MNIST example):
if mode == tf.estimator.ModeKeys.TRAIN:
    # This gives the loss for each image in the batch.
    # The loss function is imported from another file (called loss_fn).
    # Apparently it returns None (not sure).
    loss = loss_fn.loss_fn(logits, labels)

    optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
    train_op = optimizer.minimize(
        loss=loss, global_step=tf.train.get_global_step())

    # Wrap all of this in an EstimatorSpec.
    spec = tf.estimator.EstimatorSpec(
        mode=mode,
        loss=loss,
        train_op=train_op,
        eval_metric_ops=None)

    return spec
Previous answers to similar questions suggest that the loss function is not returning anything. However, when I test the loss function on randomly generated arrays, it works fine and produces normal values.
Moreover, even if I return a constant like 10.0 from the loss function, I still get the same error.
I am not sure how to proceed from here. Also, is there any way to print the loss returned by the loss function? Apparently the tf.estimator API starts a TensorFlow session internally, and if I try to create another session (to print the value returned by the loss function) I run into other errors.
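For context, here is a stripped-down sketch of the kind of value I expect loss_fn to return, i.e. a single scalar tf.Tensor; the sum-of-squares term below is only a placeholder and is not my actual YOLO loss:

# Hypothetical placeholder, not the real YOLO loss: it only illustrates
# that loss_fn must return a scalar tf.Tensor on every code path.
import tensorflow as tf

def loss_fn(logits, labels):
    # Per-example squared error, reduced to a single scalar tensor.
    per_example = tf.reduce_sum(tf.square(logits - labels), axis=-1)
    return tf.reduce_mean(per_example)  # scalar tensor, never None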
Answer 0 (score: 0):
"However, when I test the loss function on randomly generated arrays, it works fine and produces normal values."
It seems there is a problem with your input_fn. Are you sure you implemented it correctly?
"Also, is there any way to print the loss returned by the loss function?"
The Estimator automatically prints the value of the loss to the console every 'save_summary_steps' global steps. You can also track the loss with a scalar summary, like this:
tf.summary.scalar('loss', loss)
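If you want to see the value at a controlled interval without opening a second session yourself, you can also attach a logging hook to the EstimatorSpec. A minimal sketch (the tag name and the every_n_iter value are just examples):

# Log the loss tensor built in model_fn every 50 training steps.
logging_hook = tf.train.LoggingTensorHook(
    tensors={"loss": loss},
    every_n_iter=50)

spec = tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    training_hooks=[logging_hook])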