How to compute the gradient of a tensor in TensorFlow eager execution

Time: 2018-07-23 16:09:43

Tags: tensorflow gradient-descent

I want to compute the gradient of a tensor in TensorFlow eager execution mode, but the gradient is always None. For example, in the code below, I expect to compute the gradient dX.

from __future__ import absolute_import, division, print_function
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

import os
os.environ["CUDA_VISIBLE_DEVICES"]="6"
import warnings
warnings.filterwarnings('ignore')

NUM_EXAMPLES = 1000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise

def loss(weights, biases):
  error = training_inputs * weights + biases - training_outputs
  return tf.reduce_mean(tf.square(error))

train_steps = 200
learning_rate = 0.01
W = tfe.Variable(5.)
B = tfe.Variable(10.)

print("Initial loss: {:.3f}".format(loss(W, B)))

for i in range(train_steps):
  dW, dB = 0, 0

  with tf.GradientTape() as tape:
    loss_value = loss(W, B)
    x = loss_value * 2
  dW, dB, dX = tape.gradient(loss_value, [W, B, x])

  W.assign_sub(dW * learning_rate)
  B.assign_sub(dB * learning_rate)
  if i % 20 == 0:
    print("dX:{}, dW:{}, dB:{}".format(dX, dW, dB))

Here is the result. dX is None.

Initial loss: 68.959
dX:None, dW:tf.Tensor(3.6713886, shape=(), dtype=float32), dB:tf.Tensor(16.035084, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(2.528957, shape=(), dtype=float32), dB:tf.Tensor(10.7087, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(1.7416952, shape=(), dtype=float32), dB:tf.Tensor(7.151658, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(1.1992912, shape=(), dtype=float32), dB:tf.Tensor(4.776187, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(0.82566154, shape=(), dtype=float32), dB:tf.Tensor(3.1897786, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(0.5683386, shape=(), dtype=float32), dB:tf.Tensor(2.1303184, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(0.39114872, shape=(), dtype=float32), dB:tf.Tensor(1.4227662, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(0.26915967, shape=(), dtype=float32), dB:tf.Tensor(0.9502286, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(0.18518688, shape=(), dtype=float32), dB:tf.Tensor(0.63464075, shape=(), dtype=float32)
dX:None, dW:tf.Tensor(0.12739493, shape=(), dtype=float32), dB:tf.Tensor(0.42386985, shape=(), dtype=float32)

1 answer:

Answer 0: (score: 0)

Just realized that loss_value does not depend on x — x is computed *from* loss_value, not the other way around — so tape.gradient(loss_value, x) returns None.
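To differentiate through x, swap the arguments: ask the tape for the gradient of x (the dependent value) with respect to the variables it actually depends on. A minimal sketch, using the TF 2.x API (tf.Variable and eager execution by default, rather than tfe.Variable and tf.enable_eager_execution) and made-up toy data:

```python
import tensorflow as tf

# Toy data and variables (hypothetical values, for illustration only)
W = tf.Variable(5.0)
B = tf.Variable(10.0)
inputs = tf.constant([1.0, 2.0, 3.0])
targets = tf.constant([5.0, 8.0, 11.0])

with tf.GradientTape() as tape:
    loss_value = tf.reduce_mean(tf.square(inputs * W + B - targets))
    x = loss_value * 2  # x depends on loss_value, hence on W and B

# tape.gradient(loss_value, x) would be None: loss_value was computed
# before x and does not depend on it. x *does* depend on W and B, so
# differentiate x with respect to them instead:
dW, dB = tape.gradient(x, [W, B])
print("dW:", dW.numpy(), "dB:", dB.numpy())
```

By the chain rule these gradients are exactly twice the gradients of loss_value, since x = 2 * loss_value. (Note: a non-persistent GradientTape may only be queried once; pass persistent=True to tf.GradientTape if you want both tape.gradient(loss_value, ...) and tape.gradient(x, ...).)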