TensorFlow, IPython Notebook: plotting the linear regression line

Date: 2017-06-29 03:19:40

Tags: python matplotlib tensorflow linear-regression

This code does linear regression with TensorFlow, written in a Jupyter Notebook with Python 3.

Referenced code: here

My CSV data contains two columns: Height and SoC. I want to plot all of my data points on a graph, with Height on the X axis and SoC on the Y axis, and then draw the best-fit line I get from the model (as in the code below) over them.

SoC values range from 0 to 100, and Height values range from 0 to 1.

Both Height and SoC are floats.
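
For reference, the picture I am after is just a scatter of the raw points with the fitted line drawn over them. A minimal sketch of the scatter alone with pandas and matplotlib (assuming the file has no header row, matching the TF reader below; the column names are made up here):

import pandas as pd
import matplotlib.pyplot as plt

# Read the two-column CSV; the names are illustrative, the file itself is headerless
df = pd.read_csv("battdata.csv", header=None, names=["Height", "SoC"])

plt.scatter(df["Height"], df["SoC"], s=10, label="Data")
plt.xlabel("Height")
plt.ylabel("SoC")
plt.axis([0, 1, 0, 100])  # Height in [0, 1], SoC in [0, 100]
plt.legend()
plt.show()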

The graph I can currently produce (with the code below) does not look the way I want.

How can I plot this particular graph? Thanks in advance!

Code:

import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
rng = np.random

from numpy import genfromtxt
from sklearn.datasets import load_boston

# Parameters
learning_rate = 0.0001
training_epochs = 1000
display_step = 50
n_samples = 222

filename_queue = tf.train.string_input_producer(["battdata.csv"],shuffle=False)
reader = tf.TextLineReader() # skip_header_lines=1 if csv has headers
key, value = reader.read(filename_queue)

# Default values, in case of empty columns. Also specifies the type of the
# decoded result.
record_defaults = [[1.], [1.]]
height, soc = tf.decode_csv(
    value, record_defaults=record_defaults)

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
pred_soc = tf.add(tf.multiply(height, W), b) # y = Wx + b, where W is the slope (gradient) and b the intercept

# Mean squared error
cost = tf.reduce_sum(tf.pow(pred_soc-soc, 2))/(2*n_samples)

# Gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    # Start populating the filename queue.
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        _, cost_value = sess.run([optimizer, cost])

        # Display logs per epoch step
        if (epoch+1) % display_step == 0:
            c = sess.run(cost)
            print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(c),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    training_cost = sess.run(cost)
    print ("Training cost=", training_cost, "W=", sess.run(W), "b=", sess.run(b), '\n')

    # Plot data after completing training
    train_X = []
    train_Y = []
    for i in range(n_samples):  # loop through the input data once
        x_val, y_val = sess.run([height, pred_soc])  # fetch each Height and its prediction from the trained model
        train_X.append(x_val)
        train_Y.append(y_val)

    # Graphic display (note: train_Y holds the model's predictions, so both the
    # dots and the line below trace the fitted line, not the raw SoC values)

    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.ylabel("SoC")
    plt.xlabel("Height")
    plt.axis([0, 1, 0, 100])
    plt.plot(train_X, train_Y, linewidth=2.0)
    plt.legend()
    plt.show()

    coord.request_stop()
    coord.join(threads)

1 Answer:

Answer 0 (score: 1):

I don't understand why you say the current plot does not "look" the way you want.

Since the same input value is mapped to multiple output values, the best a linear regression can give you is a line that runs close to their mean.
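
If you still want the raw points with the best-fit line drawn over them, one way (a sketch, not the original poster's code; it reuses the session, the height/soc tensors, and the trained W and b from the question) is to fetch the raw values instead of the predictions and draw the line directly from W and b:

    # Sketch: run inside the same session after training. The queue cycles
    # through the file indefinitely, so another n_samples reads are available.
    xs, ys = [], []
    for _ in range(n_samples):
        h, s = sess.run([height, soc])   # raw Height and SoC for one record
        xs.append(h)
        ys.append(s)

    w_val, b_val = sess.run([W, b])      # trained slope and intercept
    line_x = np.array([0.0, 1.0])        # Height spans [0, 1]
    line_y = w_val * line_x + b_val      # y = Wx + b

    plt.plot(xs, ys, 'ro', label='Original data')
    plt.plot(line_x, line_y, linewidth=2.0, label='Fitted line')
    plt.xlabel("Height")
    plt.ylabel("SoC")
    plt.axis([0, 1, 0, 100])
    plt.legend()
    plt.show()

With many SoC values per Height, the fitted line should pass close to the average SoC at each Height, which is consistent with the behaviour described above.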