FailedPreconditionError when trying to use RMSPropOptimizer in TensorFlow

Asked: 2016-02-24 00:33:22

Tags: tensorflow gradient-descent

I am trying to use RMSPropOptimizer to minimize a loss. Here is the relevant part of the code:

import tensorflow as tf

#build large convnet...
#...

opt = tf.train.RMSPropOptimizer(learning_rate=0.0025, decay=0.95)

#do stuff to get targets and loss...
#...

grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(g, -1, 1), v) for g, v in grads_and_vars]
opt_op = opt.apply_gradients(capped_grads_and_vars)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
while True:
    sess.run(opt_op)

The problem is that when I run this, I get the following error:

W tensorflow/core/common_runtime/executor.cc:1091] 0x10a0bba40 Compute status: Failed precondition: Attempting to use uninitialized value train/output/bias/RMSProp
     [[Node: RMSProp/update_train/output/bias/ApplyRMSProp = ApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](train/output/bias, train/output/bias/RMSProp, train/output/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, clip_by_value_9)]]
     [[Node: _send_MergeSummary/MergeSummary_0 = _Send[T=DT_STRING, client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=-6901001318975381332, tensor_name="MergeSummary/MergeSummary:0", _device="/job:localhost/replica:0/task:0/cpu:0"](MergeSummary/MergeSummary)]]
Traceback (most recent call last):
  File "dqn.py", line 213, in <module>
    result = sess.run(opt_op)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 385, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 461, in _do_run
    e.code)
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value train/output/bias/RMSProp
     [[Node: RMSProp/update_train/output/bias/ApplyRMSProp = ApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](train/output/bias, train/output/bias/RMSProp, train/output/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, clip_by_value_9)]]
Caused by op u'RMSProp/update_train/output/bias/ApplyRMSProp', defined at: 
  File "dqn.py", line 159, in qLearnMinibatch
    opt_op = self.opt.apply_gradients(capped_grads_and_vars)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 288, in apply_gradients
    update_ops.append(self._apply_dense(grad, var))
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/rmsprop.py", line 103, in _apply_dense
    grad, use_locking=self._use_locking).op
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/gen_training_ops.py", line 171, in apply_rms_prop
    grad=grad, use_locking=use_locking, name=name)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 659, in apply_op
    op_def=op_def)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1904, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1083, in __init__
    self._traceback = _extract_stack()

Note that I do not get this error when I use the usual GradientDescentOptimizer. I am initializing my variables, as you can see above, but I don't know what 'train/output/bias/RMSProp' is, since I never created any such variable. I only have 'train/output/bias', which is initialized above.
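One way to inspect what variables actually exist in the graph is to list them all after the training op has been built; a minimal diagnostic sketch, assuming the tf.all_variables() API from this era of TensorFlow:

import tensorflow as tf

# After apply_gradients has been called, the optimizer's "slot" variables
# (e.g. .../RMSProp, .../RMSProp_1) appear alongside the model's own
# variables, even though user code never created them explicitly.
for v in tf.all_variables():
    print(v.name)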

Thanks!

1 Answer:

Answer 0 (score: 3)

So, for anyone who runs into a similar problem in the future, I found this post helpful: Tensorflow: Using Adam optimizer

Basically, I was running

sess.run(tf.initialize_all_variables()) 

before I had defined my loss minimization op:

loss = tf.square(targets)
#create the gradient descent op
grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(g, -clip_delta, clip_delta), v) for g, v in grads_and_vars]    #gradient capping
opt_op = opt.apply_gradients(capped_grads_and_vars)

This needs to happen before running the initialization op! The reason: apply_gradients makes the optimizer create extra "slot" variables (e.g. train/output/bias/RMSProp, which holds the moving average of squared gradients), and tf.initialize_all_variables() only initializes the variables that exist at the moment it is called.
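Putting it together, the order that works is: build the model, build the optimizer ops, then initialize. A minimal end-to-end sketch (the tiny linear model here is a hypothetical stand-in for the convnet; only the optimizer settings come from the code above):

import tensorflow as tf

# Hypothetical stand-in model (the real code builds a large convnet).
x = tf.placeholder(tf.float32, shape=[None, 4])
targets = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([4, 1]), name='w')
b = tf.Variable(tf.zeros([1]), name='b')
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - targets))

opt = tf.train.RMSPropOptimizer(learning_rate=0.0025, decay=0.95)
grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
opt_op = opt.apply_gradients(capped_grads_and_vars)  # this creates the RMSProp slot variables

sess = tf.Session()
sess.run(tf.initialize_all_variables())  # initializes the model variables AND the slot variables
# training loop: sess.run(opt_op, feed_dict={x: ..., targets: ...})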