How to speed up many `tf.gradients` operations

Time: 2019-01-18 21:41:21

Tags: tensorflow

I want to compute the gradient of a vector-valued function f(x) with respect to a scalar TensorFlow variable x. A little searching suggests looping over the elements of f, but when f(x) is large, building the graph that way is very slow. Is there a smarter approach?

In the code below, I have already computed the gradient of tf.reduce_sum(f(x)), so the gradients of f(x) with respect to x should already have been computed (internally). I would like to recover those intermediate gradients without extra calls to tf.gradients (in PyTorch this is easy, because I can give tensors a .grad attribute).

import tensorflow as tf
import numpy as np

A = np.random.rand(5000).astype(np.float32)  # keep the dtype consistent with x
x = tf.Variable(1.0)
f = A * x                 # vector-valued function of the scalar x
loss = tf.reduce_sum(f)

# df/dx should be available internally by now in theory; building this op is fast
grad_loss = tf.gradients(loss, x)

grad_f = [None] * 5000
# this loop is slow, and becomes slower as it proceeds (why?)
for i in range(5000):
    print(i)
    grad_f[i] = tf.gradients(f[i], x)[0]
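
For comparison, this is roughly the PyTorch pattern I have in mind (a sketch, not verified here; the expand and retain_grad calls are my way of exposing the per-element gradients, since each f[i] then touches its own copy of x):

```python
import torch

n = 5000
A = torch.rand(n)
x = torch.tensor(1.0, requires_grad=True)

xs = x.expand(n)   # n views of the same scalar x
xs.retain_grad()   # ask autograd to keep .grad on this intermediate tensor
f = A * xs         # f[i] depends only on xs[i]
loss = f.sum()
loss.backward()

print(xs.grad)     # xs.grad[i] == df_i/dx; equals A in this linear example
```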
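
One workaround I have come across (a sketch assuming TF 1.x graph mode; the replicated tensor xs and the size n are illustrative, not part of my original code) is the same trick in TensorFlow: broadcast the scalar into a vector of per-element copies, so that a single tf.gradients call returns all per-element gradients at once:

```python
import tensorflow as tf
import numpy as np

n = 5000
A = np.random.rand(n).astype(np.float32)
x = tf.Variable(1.0)

xs = x * tf.ones([n])   # n copies of x; an intermediate tensor, not a variable
f = A * xs              # f[i] depends only on xs[i]
loss = tf.reduce_sum(f)

# One call builds one small gradient subgraph instead of 5000:
# d(sum_i f_i)/d(xs_j) = df_j/dx, because f_j only touches xs_j.
grad_f = tf.gradients(loss, xs)[0]   # shape [n]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad_f))          # equals A for this linear example
```

This keeps the graph small because tf.gradients runs only once. There is also an experimental Jacobian helper shipped under tensorflow.python.ops.parallel_for.gradients that might apply here, but I have not tried it.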

0 Answers:

There are no answers yet.