Forcing a dependency on a variable update

Asked: 2016-02-29 07:04:38

Tags: python tensorflow

Suppose I have some function fx of a variable x:

x = tf.Variable(1.0)
fx = x*x

and an op that updates x:

new_x = x.assign(2.0)

I'd like to get the value of fx after the update to x has run. I thought that

with tf.control_dependencies([new_x,]):
    new_fx = tf.identity(fx)

would force new_fx to depend on the update new_x, but this doesn't seem to be the case:

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

# prints 1.0, expected 4.0
print "new fx", sess.run(new_fx)

Is there some other way to define the updated value of fx?

Obviously I could create a new, independent copy by writing something like new_fx = new_x * new_x, but that blows up the graph size and also requires access to the definition of fx, which I'd prefer to treat as a black box.

Edit: to motivate this, here's a sketch of the code I'd like to write:

# Hamiltonian Monte Carlo update, simplified
def hmc_step(x, momentum, logpdf, n_steps=50): 
    # x and momentum are Variables
    # logpdf is a Tensor with potentially complicated dependence on x

    grad = tf.gradients(logpdf, x)[0]

    # initial position
    new_x = x

    for i in range(n_steps):
        # update position
        new_x = x.assign(new_x + momentum)

        # update momentum using gradient at *current* position
        with tf.control_dependencies([new_x]):
             momentum = momentum + grad # DOESN'T WORK

        # DOES WORK BUT IS UGLY
        # new_logpdf = define_logpdf(new_x)
        # new_grad = tf.gradients(new_logpdf, new_x)[0]
        # momentum = momentum + new_grad

    # (do some stuff to accept/reject the new x)
    # ....

    return new_x

Defining a new copy of logpdf and re-deriving the gradients on every pass through the loop feels really inelegant: it requires access to define_logpdf() and blows up the graph size by a factor of 50. Is there a better way to do this (short of something equivalent to theano.scan)?

1 Answer:

Answer 0 (score: 2)

A with tf.control_dependencies([op]) block forces a control dependency on op only for the other operations *created inside* the block. In your case, x*x was created outside the block, and tf.identity just fetches its old value. Here is what you want:

with tf.control_dependencies([new_x,]):
  new_fx = x*x
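Putting this together, here is a minimal end-to-end sketch of the fix. Note this is an assumption-laden update: the question uses the long-removed TF 0.x API (tf.initialize_all_variables, Python 2 print), so the sketch below uses the TensorFlow 2 compat.v1 graph-mode API instead; the control-dependency behavior it demonstrates is the same.

```python
import tensorflow as tf

# control_dependencies only matters in graph mode; eager execution
# (the TF2 default) runs ops in program order anyway.
tf.compat.v1.disable_eager_execution()

x = tf.Variable(1.0)
new_x = x.assign(2.0)

# The multiply (and the read of x it contains) is created *inside*
# the block, so it runs only after the assignment completes.
with tf.control_dependencies([new_x]):
    new_fx = x * x

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(new_fx))  # 4.0
```

The key point is that a control dependency orders op *creation-time* relationships, not reads of pre-existing tensors: recreating the x * x op inside the block is what ties it to the assignment.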