tensorflow lstm: how can I get the gradient of the loss with respect to the input data, rather than the variable weights and biases?

Asked: 2016-12-28 03:27:35

Tags: tensorflow gradient lstm

How can I get the gradient of the loss with respect to the input data, rather than with respect to the variables (weights and biases)?

import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell  # pre-1.0 TensorFlow API

lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=0.0)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
pred = tf.matmul(outputs[-1], weights['out']) + biases['out']

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
compute_gradients = optimizer.compute_gradients(cost)  # list of (gradient, variable) pairs
train = optimizer.apply_gradients(compute_gradients)

with tf.Session() as sess:
    sess.run(init)
    fd = {x: batch_x, y: batch_y}
    sess.run(train, feed_dict=fd)

    # Evaluate the (gradient, variable) pairs for the trainable variables.
    grad_vals = sess.run(compute_gradients, feed_dict=fd)

This lets me compute the gradients for the weights and biases, so how can I get the gradient directly with respect to batch_x?

input_grad = sess.run(tf.gradients(cost, batch_x), feed_dict=fd)

The value of input_grad is [None].

1 Answer:

Answer 0 (score: 1)

The problem was solved in the comments. tf.gradients can only differentiate with respect to tensors that are part of the graph; batch_x is a plain NumPy array outside the graph, which is why the result was [None]. batch_x should be replaced with the placeholder x in the line below:

input_grad = sess.run(tf.gradients(cost, x), feed_dict=fd)
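
For completeness, here is a minimal self-contained sketch of the fix. The question uses the pre-1.0 TensorFlow API; this sketch is ported to the TF 1.x API (tf.nn.dynamic_rnn in place of rnn.rnn), and the layer sizes and random batch data are made up purely for illustration. The key point is unchanged: tf.gradients is called on the placeholder x that lives in the graph, and the concrete batch is only supplied later through feed_dict.

import numpy as np
import tensorflow as tf

# Illustrative sizes (assumptions, not from the question).
n_steps, n_input, n_hidden, n_classes = 5, 8, 16, 3

# Placeholders are graph tensors, so they can be differentiated against.
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=0.0)
outputs, states = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)

weights = tf.Variable(tf.random_normal([n_hidden, n_classes]))
biases = tf.Variable(tf.zeros([n_classes]))
pred = tf.matmul(outputs[:, -1, :], weights) + biases  # logits from the last time step

cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

# Gradient of the loss w.r.t. the input placeholder, not the variables.
input_grad_op = tf.gradients(cost, x)[0]  # same shape as x

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch_x = np.random.rand(4, n_steps, n_input).astype(np.float32)
    batch_y = np.eye(n_classes)[np.random.randint(n_classes, size=4)]
    grad_vals = sess.run(input_grad_op, feed_dict={x: batch_x, y: batch_y})
    print(grad_vals.shape)  # (4, n_steps, n_input)

If x were instead a list of per-step tensors (as the old rnn.rnn API expects), tf.gradients(cost, x) would return one gradient tensor per time step rather than a single tensor.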