MSE does not decrease with the number of epochs?

Time: 2017-07-19 09:41:43

标签: machine-learning tensorflow gradient-descent

This is a batch gradient descent implementation using TensorFlow.

When I run this code, the MSE stays constant.

import tensorflow as tf
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing()

# Standardize the features so gradient descent converges well
std = StandardScaler()
scaled_housing_data = std.fit_transform(housing.data)

m, n = scaled_housing_data.shape

# Prepend a column of ones for the bias term
scaled_housing_data_with_bias = np.c_[np.ones((m, 1)), scaled_housing_data]

n_epochs = 1000
n_learning_rate = 0.01

x = tf.constant(scaled_housing_data_with_bias, dtype=tf.float32)
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32)
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42))
y_pred = tf.matmul(x, theta)

error = y_pred - y
mse = tf.reduce_mean(tf.square(error))
gradients = 2/m * tf.matmul(tf.transpose(x), error)

training_op = tf.assign(theta, theta - n_learning_rate * gradients)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)

    best_theta = theta.eval()

Output:

('Epoch', 0, 'MSE =', 2.7544272)
('Epoch', 100, 'MSE =', 2.7544272)
('Epoch', 200, 'MSE =', 2.7544272)
('Epoch', 300, 'MSE =', 2.7544272)
('Epoch', 400, 'MSE =', 2.7544272)
('Epoch', 500, 'MSE =', 2.7544272)
('Epoch', 600, 'MSE =', 2.7544272)
('Epoch', 700, 'MSE =', 2.7544272)
('Epoch', 800, 'MSE =', 2.7544272)
('Epoch', 900, 'MSE =', 2.7544272)

No matter what I try, the mean squared error (MSE) stays the same. Please help.

2 answers:

Answer 0 (score: 0)

Maybe you should try again. I just copied your code and ran it (presumably under Python 3, where `/` is true division), and the loss decreases correctly.
Output:

Epoch 0 MSE = 2.75443
Epoch 100 MSE = 0.632222
Epoch 200 MSE = 0.57278
Epoch 300 MSE = 0.558501
Epoch 400 MSE = 0.54907
Epoch 500 MSE = 0.542288
Epoch 600 MSE = 0.537379
Epoch 700 MSE = 0.533822
Epoch 800 MSE = 0.531242
Epoch 900 MSE = 0.529371

Answer 1 (score: 0)

If your MSE stays the same, it means your theta is not being updated, which means the gradients are zero. Change this line and check:

gradients = 2.0/m * tf.matmul(tf.transpose(x), error)  # 2.0/m forces float division; in Python 2, 2/m with integer m is floor division and equals 0
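The root cause can be seen without TensorFlow at all. The tuple-style output in the question, `('Epoch', 0, 'MSE =', 2.7544272)`, suggests the asker is running Python 2, where `/` on two integers is floor division. A minimal sketch, assuming `m = 20640` (the number of rows in the California housing dataset):

```python
m = 20640  # number of samples in the California housing dataset

# What Python 2's `2 / m` computes when both operands are ints:
# floor division, which truncates to zero.
floor_result = 2 // m

# What the corrected `2.0 / m` computes: true (float) division.
true_result = 2.0 / m

print(floor_result)  # 0 -> every gradient is multiplied by 0, so theta never moves
print(true_result)   # a small positive scale factor (~9.69e-05)
```

With the scale factor stuck at 0, `training_op` assigns `theta - learning_rate * 0` back to `theta` every step, so the MSE is identical at every epoch, exactly as in the question's output. Writing `2.0/m` (or adding `from __future__ import division` in Python 2) restores the proper gradient magnitude.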