Uninitialized variables in TensorFlow

Date: 2017-07-06 07:22:23

Tags: machine-learning tensorflow training-data

I am trying to write a machine learning program. The idea is to train a model (defined in q_model) with RMSProp. I report a very simplified version of the code here, but it doesn't work.

import tensorflow as tf
import numpy as np

#--------------------------------------
# Model definition
#--------------------------------------

# Let's use a simple nn for the Q value function

W = tf.Variable(tf.random_normal([3,10],dtype=tf.float64), name='W')
b = tf.Variable(tf.random_normal([10],dtype=tf.float64), name='b')

def q_model(X,A):
    input = tf.concat((X,A), axis=1)
    return tf.reduce_sum( tf.nn.relu(tf.matmul(input, W) + b), axis=1)

#--------------------------------------
# Model and model initializer
#--------------------------------------

optimizer = tf.train.RMSPropOptimizer(0.9)
init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)

#--------------------------------------
# Learning
#--------------------------------------

x = np.matrix(np.random.uniform((0.,0.),(1.,1.), (1000,2)))
a = np.matrix(np.random.uniform((0),(1), 1000)).T
y = np.matrix(np.random.uniform((0),(1), 1000)).T

y_batch , x_batch, a_batch = tf.placeholder("float64",shape=(None,1), name='y'), tf.placeholder("float64",shape=(None,2), name='x'), tf.placeholder("float64",shape=(None,1), name='a')
error = tf.reduce_sum(tf.square(y_batch - q_model(x_batch,a_batch))) / 100.
train = optimizer.minimize(error)

indx = range(1000)
for i in range(100):
    # batches
    np.random.shuffle(indx)
    indx = indx[:100]
    print sess.run({'train':train}, feed_dict={'x:0':x[indx],'a:0':a[indx],'y:0':y[indx]})

The error is:

Traceback (most recent call last):
  File "/home/samuele/Projects/GBFQI/test/tf_test.py", line 45, in <module>
    print sess.run({'train':train}, feed_dict={'x:0':x[indx],'a:0':a[indx],'y:0':y[indx]})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value b/RMSProp
     [[Node: RMSProp/update_b/ApplyRMSProp = ApplyRMSProp[T=DT_DOUBLE, _class=["loc:@b"], use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](b, b/RMSProp, b/RMSProp_1, RMSProp/update_b/Cast, RMSProp/update_b/Cast_1, RMSProp/update_b/Cast_2, RMSProp/update_b/Cast_3, gradients/add_grad/tuple/control_dependency_1)]]

Caused by op u'RMSProp/update_b/ApplyRMSProp', defined at:
  File "/home/samuele/Projects/GBFQI/test/tf_test.py", line 38, in <module>
    train = optimizer.minimize(error)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 325, in minimize
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 456, in apply_gradients
    update_ops.append(processor.update_op(self, grad))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 97, in update_op
    return optimizer._apply_dense(g, self._v)  # pylint: disable=protected-access
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/rmsprop.py", line 140, in _apply_dense
    use_locking=self._use_locking).op
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/gen_training_ops.py", line 449, in apply_rms_prop
    use_locking=use_locking, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

FailedPreconditionError (see above for traceback): Attempting to use uninitialized value b/RMSProp
     [[Node: RMSProp/update_b/ApplyRMSProp = ApplyRMSProp[T=DT_DOUBLE, _class=["loc:@b"], use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](b, b/RMSProp, b/RMSProp_1, RMSProp/update_b/Cast, RMSProp/update_b/Cast_1, RMSProp/update_b/Cast_2, RMSProp/update_b/Cast_3, gradients/add_grad/tuple/control_dependency_1)]]

I can't explain this error to myself, because the model has been initialized; in fact, if I run

print sess.run(q_model(x,a))

the model works as expected and produces no error.

EDIT:

My question is different from this question. I was already aware of

init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)

but I did not know that it should also be executed after the optimizer is created.

1 Answer:

Answer 0 (score: 1):

You need to put this code:

init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)

after these ops are created:

y_batch , x_batch, a_batch = tf.placeholder("float64",shape=(None,1), name='y'), tf.placeholder("float64",shape=(None,2), name='x'), tf.placeholder("float64",shape=(None,1), name='a')
error = tf.reduce_sum(tf.square(y_batch - q_model(x_batch,a_batch))) / 100.
train = optimizer.minimize(error)

init = tf.initialize_all_variables()
sess = tf.Session()

sess.run(init)

Otherwise, the hidden variables that the call to optimizer.minimize adds to the graph are not initialized.
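If you want to see exactly which variables are missing, you can ask the session to report them. A minimal sketch, assuming the TF 1.x API used in the question (tf.report_uninitialized_variables):

# After optimizer.minimize(error), this prints the names of variables the
# session has not initialized yet, e.g. the RMSProp slots 'W/RMSProp', 'b/RMSProp'.
print sess.run(tf.report_uninitialized_variables())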

Meanwhile, the call to print sess.run(q_model(x,a)) works because the variables used by that part of the graph have all been initialized.

By the way: use tf.global_variables_initializer instead of the deprecated tf.initialize_all_variables.
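With that API the initialization block simply becomes (same placement, i.e. after optimizer.minimize):

init = tf.global_variables_initializer()
sess = tf.Session()

sess.run(init)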

EDIT:

To perform selective initialization, you could do something like this:

with tf.variable_scope("to_be_initialised"):
    train = optimizer.minimize(error)

sess.run(tf.variables_initializer(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='to_be_initialised')))
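As an alternative to wrapping the minimize call in a scope, here is a sketch (again assuming the TF 1.x API) that initializes only the variables the session reports as uninitialized, so variables that were already initialized, such as W and b, keep their values:

# Names of variables the session has not initialized yet.
uninit_names = set(sess.run(tf.report_uninitialized_variables()))
# Match them against the global variables (v.name is e.g. 'b/RMSProp:0').
uninit_vars = [v for v in tf.global_variables()
               if v.name.split(':')[0] in uninit_names]
# Initialize only those, leaving W and b untouched.
sess.run(tf.variables_initializer(uninit_vars))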