TensorFlow linear regression yields a noticeably large mean squared error

Date: 2018-04-14 04:42:59

Tags: python tensorflow regression linear-regression

I am new to TensorFlow, and I am trying to implement a simple feed-forward network for regression, purely for learning purposes. The complete runnable code is below.

The regression mean squared error is about 6, which is quite large. This is somewhat unexpected, because the target function is the simple linear function 2 * x + y, and I expected better performance.


I am asking for help to check whether there is any error in my code. I carefully checked the matrix dimensions, and those should be fine, but I may have misunderstood something, so that the network or the session is not configured correctly. For example, should I run the training session multiple times instead of just once (the code below surrounded by # TRAINING)? I have seen examples where data is fed in piece by piece and training is run step by step; here I run training only once and feed in all the data.

If the code is fine, then maybe it is a modeling issue, but I really would not expect to need a complex network for such a simple regression.

import tensorflow as tf
import numpy as np
from sklearn.metrics import mean_squared_error

# inputs are points from a 100x100 grid in domain [-2,2]x[-2,2], total 10000 points
lsp = np.linspace(-2,2,100)
gridx,gridy = np.meshgrid(lsp,lsp)
inputs = np.dstack((gridx,gridy))
inputs = inputs.reshape(-1,inputs.shape[-1]) # reshapes the grid into a 10000x2 matrix
feature_size = inputs.shape[1] # feature_size is 2, features are the 2D coordinates of each point
input_size = inputs.shape[0] # input_size is 10000

# a simple function f(x)=2*x[0]+x[1] to regress
f = lambda x: 2 * x[0] + x[1]
label_size = 1
labels = f(inputs.transpose()).reshape(-1,1) # reshapes labels as a column vector

ph_inputs = tf.placeholder(tf.float32, shape=(None, feature_size), name='inputs')
ph_labels = tf.placeholder(tf.float32, shape=(None, label_size), name='labels')

# just one hidden layer with 16 units
hid1_size = 16
w1 = tf.Variable(tf.random_normal([hid1_size, feature_size], stddev=0.01), name='w1')
b1 = tf.Variable(tf.random_normal([hid1_size, label_size]), name='b1')
y1 = tf.nn.relu(tf.add(tf.matmul(w1, tf.transpose(ph_inputs)), b1))

# the output layer
wo = tf.Variable(tf.random_normal([label_size, hid1_size], stddev=0.01), name='wo')
bo = tf.Variable(tf.random_normal([label_size, label_size]), name='bo')
yo = tf.transpose(tf.add(tf.matmul(wo, y1), bo))

# defines optimizer and predictor
lr = tf.placeholder(tf.float32, shape=(), name='learning_rate')
loss = tf.losses.mean_squared_error(ph_labels,yo)
optimizer = tf.train.GradientDescentOptimizer(lr).minimize(loss)
predictor = tf.identity(yo)

# TRAINING 
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
_, c = sess.run([optimizer, loss], feed_dict={lr:0.05, ph_inputs: inputs, ph_labels: labels})
# TRAINING 

# gets the regression results
predictions = np.zeros((input_size,1))
for i in range(input_size):
    predictions[i] = sess.run(predictor, feed_dict={ph_inputs: inputs[i, None]}).squeeze()

# prints regression MSE
print(mean_squared_error(predictions, labels))

1 answer:

Answer 0 (score: 4)

You are right, and you have already understood the problem yourself.

In fact, the problem is that you run the optimization step only once. Hence you perform a single update of the network parameters, so the cost barely decreases.

I just changed the training session of your code to make it work as expected (100 training steps):

# TRAINING
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(100):
    _, c = sess.run(
        [optimizer, loss],
        feed_dict={
            lr: 0.05,
            ph_inputs: inputs,
            ph_labels: labels
        })
    print("Train step {} loss value {}".format(i, c))
# TRAINING
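The effect of iterating can be illustrated without TensorFlow at all. The sketch below (not part of the original answer) runs plain NumPy gradient descent on the same linear target, with the same learning rate and a similar small-scale random initialization, and prints the MSE after the first and the last of 100 steps; the first value is near the ~6 reported in the question, and the last is tiny:

```python
import numpy as np

rng = np.random.default_rng(0)

# Same 100x100 grid of 2D points as in the question
lsp = np.linspace(-2, 2, 100)
gridx, gridy = np.meshgrid(lsp, lsp)
X = np.dstack((gridx, gridy)).reshape(-1, 2)
y = 2 * X[:, 0] + X[:, 1]

w = rng.normal(scale=0.01, size=2)  # small random init, as in the question
b = 0.0
lr = 0.05
n = len(y)

for step in range(100):
    err = X @ w + b - y
    # gradients of the mean squared error w.r.t. w and b
    w -= lr * (2 / n) * (X.T @ err)
    b -= lr * (2 / n) * err.sum()
    if step in (0, 99):
        print(step, np.mean((X @ w + b - y) ** 2))
```

A single step leaves the loss close to its initial value; 100 steps bring it down by many orders of magnitude, which is exactly what the training loop above achieves for the network.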

At the end of the training steps I got:

Train step 99 loss value 0.04462708160281181

and the regression MSE printed by your script is:

0.044106700712455045
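As a sanity check (a sketch, not part of the original answer): since f(x) = 2 * x[0] + x[1] is exactly linear, an ordinary least-squares fit on the same grid recovers the coefficients essentially exactly, so the remaining ~0.044 is due to the network's finite training, not a limit of the data:

```python
import numpy as np

# Rebuild the same 100x100 grid of 2D points used in the question
lsp = np.linspace(-2, 2, 100)
gridx, gridy = np.meshgrid(lsp, lsp)
inputs = np.dstack((gridx, gridy)).reshape(-1, 2)
labels = 2 * inputs[:, 0] + inputs[:, 1]

# Closed-form least squares: append a bias column and solve
X = np.hstack([inputs, np.ones((inputs.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X, labels, rcond=None)
mse = np.mean((X @ coef - labels) ** 2)
print(coef)  # expected: close to [2, 1, 0]
print(mse)   # expected: near machine precision
```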