tf.cond on a variable and tf.global_variables_initializer()

Date: 2019-03-18 07:55:26

Tags: python tensorflow deep-learning

I am running into a FailedPreconditionError from tf.global_variables_initializer(). I have narrowed it down to the following part of my code as the culprit:

def __init__(...):
    ...
    self.global_step = tf.get_variable(initializer=tf.zeros_initializer(), trainable=False, shape=(), name='global_step')
    ...
    step_rampup_value = self.step_rampup(self.global_step, self.rampup_length)

def step_rampup(self, global_step, rampup_length):
    result = tf.cond(global_step < rampup_length,
                     lambda: tf.constant(0.0),
                     lambda: tf.constant(1.0))
    return tf.identity(result, name="step_rampup")
session.run(tf.global_variables_initializer())

self.global_step is incremented by 1 by the optimizer on every iteration, so its value has to change. That is the behavior I want.
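The intended behavior, the predicate of tf.cond being re-evaluated against the variable's current value on every session.run, can be sketched in a minimal standalone example (not the question's code; tf.compat.v1 is used so it also runs under TF 2.x, and rampup_length = 3.0 is an arbitrary illustrative value):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # use the TF 1.x graph/session model

rampup_length = 3.0  # illustrative value, not from the question
global_step = tf.get_variable(initializer=tf.zeros_initializer(),
                              trainable=False, shape=(), name='global_step')
increment = tf.assign_add(global_step, 1.0)  # stand-in for the optimizer's step update

# The predicate reads the variable at run time, so the branch taken
# can change between session.run calls.
step_rampup = tf.cond(global_step < rampup_length,
                      lambda: tf.constant(0.0),
                      lambda: tf.constant(1.0))

with tf.Session() as session:
    # The initializer op is created after the variable, so it covers it.
    session.run(tf.global_variables_initializer())
    values = []
    for _ in range(5):
        values.append(session.run(step_rampup))
        session.run(increment)
    print(values)  # [0.0, 0.0, 0.0, 1.0, 1.0]
```

The condition flips from 0.0 to 1.0 as soon as global_step reaches rampup_length, which is exactly the per-step evaluation the question asks for.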

Error message:

FailedPreconditionError ...
506         with tf.Session(graph=highgraph) as session:
--> 507             session.run(tf.global_variables_initializer())
...
FailedPreconditionError: Attempting to use uninitialized value global_step
 [[node global_step/read (defined at NML_U/sNeural.py:103)  = Identity[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](global_step)]]

Why do I think that part of the code is the culprit? Because the following code works:

def __init__(...):
    ...
    self.global_step = tf.get_variable(initializer=tf.zeros_initializer(), trainable=False, shape=(), name='global_step')
    ...
    step_rampup_value = self.step_rampup(self.global_step, self.rampup_length)

def step_rampup(self, global_step, rampup_length):
    result = tf.cond(global_step.initialized_value() < rampup_length,
                     lambda: tf.constant(0.0),
                     lambda: tf.constant(1.0))
    return tf.identity(result, name="step_rampup")
session.run(tf.global_variables_initializer())

But then the condition is evaluated every time with the initial value of self.global_step (= 0), which is not the intended behavior.

This code also works fine:

def __init__(...):
    ...
    self.global_step = tf.get_variable(initializer=tf.zeros_initializer(), trainable=False, shape=(), name='global_step')
    self.global_step = tf.assign(self.global_step,0.)
    ...
    step_rampup_value = self.step_rampup(self.global_step, self.rampup_length)

def step_rampup(self, global_step, rampup_length):
    result = tf.cond(global_step < rampup_length,
                     lambda: tf.constant(0.0),
                     lambda: tf.constant(1.0))
    return tf.identity(result, name="step_rampup")
session.run(tf.global_variables_initializer())

But (perhaps) this no longer creates a dependency on global_step itself but on the assign op, which will keep assigning 0 to self.global_step.
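That worry can be made concrete: tf.assign returns a tensor, and every session.run that fetches that tensor executes the assignment again. A minimal sketch illustrating this, independent of the question's code (variable name v is illustrative):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # use the TF 1.x graph/session model

v = tf.get_variable(initializer=tf.zeros_initializer(), shape=(), name='v')
reset = tf.assign(v, 0.)   # analogous to self.global_step = tf.assign(self.global_step, 0.)
bump = tf.assign_add(v, 1.)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    session.run(bump)                # v is now 1.0
    after_bump = session.run(v)      # reads 1.0
    via_assign = session.run(reset)  # fetching the assign's output re-runs it: 0.0
    after_assign = session.run(v)    # the variable really was reset to 0.0
    print(after_bump, via_assign, after_assign)  # 1.0 0.0 1.0? No: 1.0 0.0 0.0
```

So anything downstream that depends on the assign's output keeps resetting the variable, which is why rebinding self.global_step to tf.assign(...) is not a real fix.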

How can I achieve the behavior I want?

1 Answer:

Answer 0 (score: 1)

You did not provide your complete code, so I can only guess that you may be calling tf.global_variables_initializer() *before* __init__(). Indeed, the former does not initialize variables that are created after it is called.
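A minimal standalone sketch of what this answer describes (not the question's code; variable name w is illustrative): an initializer op created before a variable exists does not cover that variable, while one created afterwards does.

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # use the TF 1.x graph/session model

graph = tf.Graph()
with graph.as_default():
    early_init = tf.global_variables_initializer()  # created BEFORE the variable
    w = tf.get_variable(initializer=tf.zeros_initializer(), shape=(), name='w')
    late_init = tf.global_variables_initializer()   # created AFTER the variable

with tf.Session(graph=graph) as session:
    session.run(early_init)  # no-op for w: it did not exist yet
    try:
        session.run(w)
        uninitialized = False
    except tf.errors.FailedPreconditionError:
        uninitialized = True  # same error class as in the question
    session.run(late_init)   # now w is covered and gets initialized
    value = session.run(w)
    print(uninitialized, value)
```

The fix is therefore to create (and run) the initializer only after __init__() has built all the variables.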