Updating a linear regression model to TensorFlow 2.0

Asked: 2020-03-30 01:22:37

Tags: python tensorflow machine-learning linear-regression

I updated my script from TensorFlow 1.x to TensorFlow 2.0, as shown below:

import numpy as np
import tensorflow as tf

# learning rate for gradient descent
learning_rate = 0.01

# number of passes through the whole training set
training_epochs = 100

# the training set
x_train = np.linspace(0, 10, 100)
y_train = x_train + np.random.normal(0,1,100)

w0 = tf.Variable(0.)
w1 = tf.Variable(0.)

# hypothesis: a simple linear model
def h(x):
    y = w1*x + w0
    return y

def squared_error(y_pred, y_true):
    return 0.5*tf.square(y_pred - y_true)

for epoch in range(training_epochs):
    with tf.GradientTape() as tape:
        y_predicted = h(x_train)
        costF = squared_error(y_predicted, y_train)
    gradients = tape.gradient(costF, [w1,w0])
    w1.assign_sub(gradients[0]*learning_rate)
    w0.assign_sub(gradients[1]*learning_rate)

print([w0.numpy(), w1.numpy()])

When I run the script above, the result is:

[nan, nan]

However, if I change the squared_error function as follows:

def squared_error(y_pred, y_true):
    return tf.reduce_mean(tf.square(y_pred - y_true))

then the result looks like this:

[0.14498015, 0.97645897]
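I tried comparing the gradient magnitudes of the two cost functions by hand in numpy (a rough sketch of my own reasoning, assuming that tape.gradient of a non-scalar target behaves like the gradient of its sum), and the summed cost's gradient comes out much larger for the same parameters:

```python
import numpy as np

# Gradient of the cost w.r.t. w1 for y_pred = w1*x + w0, derived by hand.
x = np.linspace(0, 10, 100)
y = x.copy()                      # noise-free targets, just for this check
w1, w0 = 0.0, 0.0
residual = (w1 * x + w0) - y

# Summed cost 0.5*sum((y_pred - y)^2): d/dw1 = sum(residual * x)
grad_sum = np.sum(residual * x)

# Mean cost mean((y_pred - y)^2): d/dw1 = 2*mean(residual * x)
grad_mean = 2 * np.mean(residual * x)

print(grad_sum / grad_mean)       # ≈ n/2 = 50 for n = 100 points
```

So the same learning_rate that takes stable steps with the mean cost would take steps ~50x larger with the summed cost, but I am not sure this fully explains the NaN.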

I don't understand what causes this difference. Any help would be appreciated.

0 Answers:

There are no answers yet.