I want to use TensorFlow to compute the gradients of a function. However, if I use tf.gradients, it returns a single list of gradients aggregated over the whole batch. How can I get such a list for each point in the batch?
# in a tensorflow graph I have the following code
tf_x = tf.placeholder(dtype=tf.float32, shape=(None,N_in), name='x')
tf_net = ...  # conveniently defined neural network
tf_y = tf.placeholder(dtype=tf.float32, shape=(None,1), name='y')
tf_cost = (tf_net(tf_x) - tf_y)**2 # this should have length N_samples because I did not apply a tf.reduce_mean
tf_cost_gradients = tf.gradients(tf_cost,tf_net.trainable_weights)
If we run this in a TensorFlow session,
# suppose myx = np.random.randn(N_samples,N_in) and myy conveniently chosen
feed = {tf_x: myx, tf_y: myy}
sess.run(tf_cost_gradients, feed)
I only get a single list back, not one list per sample. I could use
for i in range(len(myx)):
    feed = {tf_x: myx[i:i+1], tf_y: myy[i:i+1]}  # slicing keeps the batch dimension
    sess.run(tf_cost_gradients, feed)
but this is far too slow! What can I do? Thanks.
Answer 0 (score: 0)
Although tf.gradients has an "aggregation_method" argument, it is not easy to obtain the individual (per-example) gradients with it.
See the following threads:
https://github.com/tensorflow/tensorflow/issues/15760
https://github.com/tensorflow/tensorflow/issues/4897
In one of those threads (#4897), Ian Goodfellow makes the following suggestion for speeding up per-example gradient computation:
aggregation_method: Specifies the method used to combine gradient terms.
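For completeness, here is a minimal sketch of one simple workaround (my own, not the approach from the linked issues): build one tf.gradients op per example at graph-construction time and evaluate them all in a single sess.run. It assumes TF 1.x and a fixed batch size, and uses a tiny dense layer as a stand-in for the asker's tf_net.

# Minimal sketch, NOT the approach from the linked issues: one tf.gradients
# op per example, all evaluated in a single sess.run.  Assumes TF 1.x and a
# fixed batch size; the dense layer below is a stand-in for tf_net.
import numpy as np
import tensorflow as tf

N_in, N_samples = 3, 4

tf.reset_default_graph()
tf_x = tf.placeholder(tf.float32, shape=(N_samples, N_in), name='x')
tf_y = tf.placeholder(tf.float32, shape=(N_samples, 1), name='y')

# stand-in for tf_net: a single dense layer with parameters W, b
W = tf.get_variable('W', shape=(N_in, 1))
b = tf.get_variable('b', shape=(1,))
params = [W, b]

pred = tf.matmul(tf_x, W) + b        # shape (N_samples, 1)
tf_cost = (pred - tf_y) ** 2         # per-example squared error, no reduce_mean

# one tf.gradients call per example; the graph grows with the batch size,
# but a single sess.run returns every per-example gradient at once
per_example_grads = [tf.gradients(tf_cost[i], params) for i in range(N_samples)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    myx = np.random.randn(N_samples, N_in).astype(np.float32)
    myy = np.random.randn(N_samples, 1).astype(np.float32)
    grads = sess.run(per_example_grads, {tf_x: myx, tf_y: myy})
    # grads[i][0] is d cost_i / d W, grads[i][1] is d cost_i / d b
    print(len(grads), grads[0][0].shape)   # -> 4 (3, 1)

The trade-off is that graph size and construction time grow linearly with the batch size, so for large batches the vectorised approaches discussed in the issues above are preferable.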