TensorFlow 2.0: taking gradients raises "No gradients provided for any variable"

Time: 2019-10-03 20:59:59

Tags: tensorflow

Hey, I just switched over from an older version of TensorFlow. Consider a simple linear regression model:

import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
dtype = "float32"

# define my model here
model = keras.Sequential([
    keras.layers.Dense(2,name='l1'),
    keras.layers.Dense(128, activation='relu',name='l2'),
    keras.layers.Dense(1)
])



# create the train data
x_train = np.asarray([[2,3],[6,7],[1,5],[4,6],[10,-1],[0,0],[5,6],
[8,9],[4.5,6.2],[1,1],[0.3,0.2]]).astype(dtype)

# true weights and bias
w_train = np.asarray([[2,1]]).astype(dtype)
b = np.asarray([[-3]]).astype(dtype)

# create response 
y_train = np.dot(x_train, w_train.T) + b


# do prediction and define loss
predictions = model(x_train)
loss = lambda: tf.keras.losses.mse(predictions, y_train)

optimizer = tf.keras.optimizers.Adam()
opt_op = optimizer.minimize(loss,var_list = model.trainable_weights)

Now this raises an error: ValueError: No gradients provided for any variable: ['sequential_1/l1/kernel:0', 'sequential_1/l1/bias:0', 'sequential_1/l2/kernel:0', 'sequential_1/l2/bias:0', 'sequential_1/dense_1/kernel:0', 'sequential_1/dense_1/bias:0'].

This is really strange, because I passed the network's weights as the trainable variables, and they were used in the forward pass. I've looked at the tutorials, but it seems every new implementation computes the gradients manually and then applies them through the optimizer.

The older approach handled all of this automatically once you defined the op. I'd like to keep using that style. Any help?

1 Answer:

Answer 0 (score: 0)

In TensorFlow 2.0 you need to use a tf.GradientTape context to record the forward pass, compute the gradients from the tape, and then apply them to the model's variables:

optimizer = tf.keras.optimizers.Adam()

with tf.GradientTape() as t:
    # the forward pass must run inside the tape so it gets recorded
    predictions = model(x_train)
    # note the argument order: targets first, predictions second
    loss = tf.reduce_mean(tf.keras.losses.mse(y_train, predictions))

# compute and apply the gradients outside the tape context
grad = t.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grad, model.trainable_variables))
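If you'd rather keep the minimize-style one-liner from the question, the actual problem there is that `predictions` was computed *before* `minimize` ever ran, so the loss callable has no traceable path back to the model's variables. Moving the forward pass inside the callable fixes it. Here's a minimal sketch, assuming the TF 2.0-era `Optimizer.minimize`, which accepts a callable loss plus a `var_list`:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

dtype = "float32"

model = keras.Sequential([
    keras.layers.Dense(2, name='l1'),
    keras.layers.Dense(128, activation='relu', name='l2'),
    keras.layers.Dense(1),
])

# same synthetic linear-regression data as in the question
x_train = np.asarray([[2, 3], [6, 7], [1, 5], [4, 6], [10, -1], [0, 0],
                      [5, 6], [8, 9], [4.5, 6.2], [1, 1], [0.3, 0.2]]).astype(dtype)
w_train = np.asarray([[2, 1]]).astype(dtype)
b = np.asarray([[-3]]).astype(dtype)
y_train = np.dot(x_train, w_train.T) + b

# The model call happens INSIDE the callable: minimize() re-invokes it
# under its own GradientTape, which is what connects loss -> variables.
loss = lambda: tf.reduce_mean(tf.keras.losses.mse(y_train, model(x_train)))

optimizer = tf.keras.optimizers.Adam()
initial_loss = float(loss())
for _ in range(100):
    optimizer.minimize(loss, var_list=model.trainable_variables)
final_loss = float(loss())
```

With the forward pass deferred into the callable, no gradients are missing and the loss goes down over the training steps, without ever touching GradientTape yourself.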