TensorFlow linear regression: unable to plot the fit correctly

Time: 2018-07-30 21:59:35

Tags: python tensorflow machine-learning linear-regression

I have been working on a linear regression problem with TensorFlow. The pred_y curve I get is flat. How should I fit the curve to the training observations?

Here is my TensorFlow code:

# coding: utf-8

# In[146]:


import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd


# In[147]:


# Note: .as_matrix() was deprecated in pandas 0.23 and removed in 1.0; .values is equivalent.
train_features = pd.read_csv("training_set_X.csv", delimiter=',').values
train_observations = pd.read_csv("training_set_Y.csv", delimiter=',').values

print("Training features: ")
train_features


# In[148]:


print("Training observations: ")
train_observations


# In[149]:


print("Shape of training features = ", train_features.shape)
print("Shape of training observations = ", train_observations.shape)


# In[150]:


# Normalization of training data.
train_features_stddev_arr = np.std(train_features, axis=0)
train_features_mean_arr = np.mean(train_features, axis=0)
normalized_train_features = (train_features - train_features_mean_arr) / train_features_stddev_arr


# In[151]:


print("Training features: Standard deviation....")
train_features_stddev_arr


# In[152]:


print("Training featues: Mean....")
train_features_mean_arr


# In[153]:


print("Normalized training features....")
normalized_train_features


# In[154]:


# Layer parameters.
n_nodes_h11 = 5
n_nodes_h12 = 5
n_nodes_h13 = 3
no_features = 17
learning_rate = 0.01
epochs = 200


# In[155]:


cost_history = []


# In[156]:


X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')


# In[157]:


# Defining weights and biases.
first_weight = tf.Variable(tf.random_normal([no_features, n_nodes_h11], stddev=np.sqrt(2/no_features)))
second_weight = tf.Variable(tf.random_normal([n_nodes_h11, n_nodes_h12], stddev=np.sqrt(2/n_nodes_h11)))
third_weight = tf.Variable(tf.random_normal([n_nodes_h12, n_nodes_h13], stddev=np.sqrt(2/n_nodes_h12)))
output_weight = tf.Variable(tf.random_normal([n_nodes_h13, 1], stddev=np.sqrt(2/n_nodes_h13)))


# In[158]:


first_bias = tf.Variable(tf.random_uniform([n_nodes_h11], -1.0, 1.0))
second_bias = tf.Variable(tf.random_uniform([n_nodes_h12], -1.0, 1.0))
third_bias = tf.Variable(tf.random_uniform([n_nodes_h13], -1.0, 1.0))
output_bias = tf.Variable(tf.random_uniform([1], -1.0, 1.0))


# In[159]:


# Defining activations of each layer.
first = tf.sigmoid(tf.matmul(X, first_weight) + first_bias)
second = tf.sigmoid(tf.matmul(first, second_weight) + second_bias)
third = tf.sigmoid(tf.matmul(second, third_weight) + third_bias)
output = tf.matmul(third, output_weight) + output_bias


# In[182]:


# Using Mean Squared Error
cost = tf.reduce_mean(tf.pow(output - Y, 2)) / (2 * train_features.shape[0])


# In[183]:


# Using Gradient Descent algorithm
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)


# In[184]:


init = tf.global_variables_initializer()


# In[194]:


# Running the network.
with tf.Session() as sess:
    sess.run(init)

    for step in np.arange(epochs):
        sess.run(optimizer, feed_dict={X:normalized_train_features, Y:train_observations})
        cost_history.append(sess.run(cost, feed_dict={X:normalized_train_features, Y:train_observations}))

    pred_y = sess.run(output, feed_dict={X:normalized_train_features})
    plt.plot(range(len(pred_y)), pred_y, label='pred_y')
    plt.plot(range(len(train_observations)), train_observations, label='train_observations')
    plt.legend()


# In[195]:


plt.show()
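As a side note, the cost_history list filled in the training loop above is never inspected. A minimal sketch to plot it (using the imports already in the question); a flat curve would mean the optimizer is barely updating the weights:

# Plot the recorded training cost per epoch.
plt.plot(range(len(cost_history)), cost_history)
plt.xlabel("epoch")
plt.ylabel("cost")
plt.show()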

Shape of training features = (967, 17), shape of training observations = (967, 1).

The flat line I observe for pred_y comes from the predicted values being large negative numbers, while the train_observations values are all positive.

It would be great if someone could help me with this. I don't want the pred_y line to be flat, so I must be doing something wrong. If anyone can point out my mistake, that would be great. Thanks!

Solution 1.

Your features are 17-dimensional, so it is hard to draw a meaningful curve without first reducing the dimensionality. You therefore cannot expect a meaningful plot from this code.
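With 17 input dimensions, one common workaround (a minimal sketch, not from the original post) is to scatter-plot predictions against true observations instead of plotting both as curves over the sample index; a perfect model would place every point on the diagonal. This assumes pred_y and train_observations are the (967, 1) arrays from the question's session:

y_true = train_observations.ravel()
y_pred = pred_y.ravel()

plt.scatter(y_true, y_pred, s=8, alpha=0.5)
# Reference diagonal y = x: points on it are perfect predictions.
lims = [min(y_true.min(), y_pred.min()), max(y_true.max(), y_pred.max())]
plt.plot(lims, lims, 'r--')
plt.xlabel("true observation")
plt.ylabel("predicted value")
plt.show()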

Solution 2.

See @lincr's solution below.

1 answer:

Answer 0: (score: 1)

You are using the wrong loss function here.

What you want is the mean squared error, which should be

tf.reduce_sum(tf.pow(output - Y, 2)/train_features.shape[0])

If you want to use tf.reduce_mean instead, it should be

tf.reduce_mean(tf.squared_difference(output, Y))

Note that in the reduce_sum version, the division inside the sum already performs the averaging (mean) operation. Your original cost takes the mean of the squared errors and then divides it again by 2 * train_features.shape[0] = 1934, so the loss and its gradients are scaled down by roughly three orders of magnitude, and gradient descent with learning_rate = 0.01 barely moves the weights; that is why pred_y stays close to its initial values.
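For comparison, a minimal sketch of the three cost expressions side by side, reusing output, Y, and train_features from the question:

N = train_features.shape[0]  # 967

# Original cost: the mean squared error divided again by 2 * N,
# i.e. roughly 1934x smaller than the true MSE.
cost_orig = tf.reduce_mean(tf.pow(output - Y, 2)) / (2 * N)

# Equivalent MSE formulations; these two evaluate to the same value.
cost_sum = tf.reduce_sum(tf.pow(output - Y, 2) / N)
cost_mean = tf.reduce_mean(tf.squared_difference(output, Y))

Swapping cost for either MSE form (or simply raising the learning rate to compensate for the extra 1/(2N) factor) lets gradient descent make visible progress.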