Why does K.gradients return None for the gradient of the loss with respect to the input?

Time: 2020-04-16 22:43:44

Tags: tensorflow machine-learning keras

I would like to know why my grads come out as None in the code below:

import tensorflow.keras.losses as losses
loss = losses.squared_hinge(y_true, y_pred)

from tensorflow.keras import backend as K
grads = K.gradients(loss, CNN_model.input)[0]
iterate = K.function([CNN_model.input], [loss, grads])

My CNN_model.input is: <tf.Tensor 'conv2d_3_input:0' shape=(?, 28, 28, 1) dtype=float32>

My loss is: <tf.Tensor 'Mean_3:0' shape=(1,) dtype=float64>

Note: in case it matters, I pass the prediction output of an SVM as y_pred.

1 answer:

Answer 0: (score: 2)

In my experience, TensorFlow needs a GradientTape to record the operations performed on a variable in order to compute its gradient. In your case, it should look something like this:

import numpy as np
import tensorflow as tf

x = np.random.rand(10)  # your input variable
x = tf.Variable(x)  # to be watched by GradientTape, the input should be a tensor
with tf.GradientTape() as tape:
    tape.watch(x)  # with this method you can observe your variable
    proba = model(x)  # get the model's prediction for the input
    loss = your_loss_function(y_true, proba)  # compute the loss

gradient = tape.gradient(loss, x)  # compute the gradients; this must be done outside the recording context
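To make the pattern above concrete, here is a self-contained sketch that computes the gradient of a squared-hinge loss with respect to the input of a small stand-in CNN. The model architecture, label encoding, and batch size are illustrative assumptions, not the asker's actual CNN_model:

```python
import numpy as np
import tensorflow as tf

# Hypothetical small CNN standing in for CNN_model (architecture is an assumption)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Batch of 2 random inputs matching the (?, 28, 28, 1) input shape
x = tf.Variable(np.random.rand(2, 28, 28, 1).astype("float32"))

# squared_hinge expects labels in {-1, +1}
y_true = tf.one_hot([3, 7], depth=10) * 2.0 - 1.0

with tf.GradientTape() as tape:
    tape.watch(x)                                        # record operations on the input
    y_pred = model(x)                                    # forward pass
    loss = tf.keras.losses.squared_hinge(y_true, y_pred) # per-sample loss

grads = tape.gradient(loss, x)  # gradient of the loss w.r.t. the input
print(grads.shape)              # same shape as the input: (2, 28, 28, 1)
```

Note that y_pred must come from a TensorFlow computation that the tape can trace; if the predictions are produced outside the graph (for example by a separate SVM), the tape has nothing to differentiate through and the gradient will be None.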