Predicting values with a TensorFlow neural network after training

Time: 2018-01-20 07:43:08

Tags: python tensorflow

So, I managed to train a neural network using TensorFlow. The following code:

  • reads an Excel file (the dataset)
  • scales the data
  • builds and runs the neural network

Code:

import tensorflow as tf
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

#read file
data = pd.read_excel("data.xlsx")

# Make data a np.array
data = data.values

temp_data = []


for i in range(0, len(data)):
    date = data[i][0]
    time = data[i][1]
    temperature = data[i][2]
    dewPoint = data[i][3]
    dayOfWeek = data[i][4]
    apparentTemperature = data[i][5]
    kwh = data[i][6]

    temp_data.append([kwh, date.year, date.month, date.day, time, dewPoint, temperature, apparentTemperature, dayOfWeek])


data = temp_data

#split dataset
data_train, data_test = train_test_split(data, test_size=0.2)

# Scale data
scaler = MinMaxScaler(feature_range=(-1, 1))
scaler.fit(data_train)
data_train = scaler.transform(data_train)
data_test = scaler.transform(data_test)
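# Note: keep a reference to this scaler -- any new rows you want to predict on
# later must be transformed with the same scaler, and the predicted (scaled)
# kwh values inverse-transformed back to the original units.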

# Build X and y
X_train = data_train[:, 1:]
y_train = data_train[:, 0]
X_test = data_test[:, 1:]
y_test = data_test[:, 0]

# Number of input features in the training data
n_time_dimensions = X_train.shape[1]

# Neurons
n_neurons_1 = 1024
n_neurons_2 = 512
n_neurons_3 = 256
n_neurons_4 = 128

# Session
net = tf.InteractiveSession()

# Placeholder
X = tf.placeholder(dtype=tf.float32, shape=[None, n_time_dimensions])
Y = tf.placeholder(dtype=tf.float32, shape=[None])

# Initializers
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform", scale=sigma)
bias_initializer = tf.zeros_initializer()

# Hidden weights
W_hidden_1 = tf.Variable(weight_initializer([n_time_dimensions, n_neurons_1]))
bias_hidden_1 = tf.Variable(bias_initializer([n_neurons_1]))
W_hidden_2 = tf.Variable(weight_initializer([n_neurons_1, n_neurons_2]))
bias_hidden_2 = tf.Variable(bias_initializer([n_neurons_2]))
W_hidden_3 = tf.Variable(weight_initializer([n_neurons_2, n_neurons_3]))
bias_hidden_3 = tf.Variable(bias_initializer([n_neurons_3]))
W_hidden_4 = tf.Variable(weight_initializer([n_neurons_3, n_neurons_4]))
bias_hidden_4 = tf.Variable(bias_initializer([n_neurons_4]))

# Output weights
W_out = tf.Variable(weight_initializer([n_neurons_4, 1]))
bias_out = tf.Variable(bias_initializer([1]))

# Hidden layer
hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1))
hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W_hidden_2), bias_hidden_2))
hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W_hidden_3), bias_hidden_3))
hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W_hidden_4), bias_hidden_4))

# Output layer (transpose!)
out = tf.transpose(tf.add(tf.matmul(hidden_4, W_out), bias_out))
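# (the transpose turns out from shape [batch, 1] into [1, batch] so it broadcasts
# element-wise against Y, which has shape [batch]; without it, squared_difference
# would broadcast to a [batch, batch] matrix)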

# Cost function
mse = tf.reduce_mean(tf.squared_difference(out, Y))

# Optimizer
opt = tf.train.AdamOptimizer().minimize(mse)

# Init
net.run(tf.global_variables_initializer())

# Setup plot
plt.ion()
fig = plt.figure()
ax1 = fig.add_subplot(111)
line1, = ax1.plot(y_test)
line2, = ax1.plot(y_test * 0.5)
plt.show()

# Fit neural net
batch_size = 256
mse_train = []
mse_test = []

# Run
epochs = 10
for e in range(epochs):

    # Shuffle training data
    shuffle_indices = np.random.permutation(np.arange(len(y_train)))
    X_train = X_train[shuffle_indices]
    y_train = y_train[shuffle_indices]

    # Minibatch training
    for i in range(0, len(y_train) // batch_size):
        start = i * batch_size
        batch_x = X_train[start:start + batch_size]
        batch_y = y_train[start:start + batch_size]
        # Run optimizer with batch
        net.run(opt, feed_dict={X: batch_x, Y: batch_y})

        # Show progress
        if np.mod(i, 50) == 0:
            # MSE train and test
            mse_train.append(net.run(mse, feed_dict={X: X_train, Y: y_train}))
            mse_test.append(net.run(mse, feed_dict={X: X_test, Y: y_test}))
            print('Train Error: ' + str(round(100.0 * mse_train[-1], 2)) + ' %')
            print('Test Error: ' + str(round(100.0 * mse_test[-1], 2)) + ' %')
            # Prediction
            pred = net.run(out, feed_dict={X: X_test})
            line2.set_ydata(pred)
            plt.title('Epoch ' + str(e) + ', Batch ' + str(i))
            plt.pause(0.01)

So now I read another Excel file to get the new inputs X that I want to predict for, using the following code:

#read file
data_predict = pd.read_excel("predict.xlsx")

# Make data a np.array
data_predict = data_predict.values

temp_data = []


for i in range(0, len(data_predict)):
    date = data_predict[i][0]
    time = data_predict[i][1]
    temperature = data_predict[i][2]
    dewPoint = data_predict[i][3]
    dayOfWeek = data_predict[i][4]
    apparentTemperature = data_predict[i][5]

    temp_data.append([date.year, date.month, date.day, time, dewPoint, temperature, apparentTemperature, dayOfWeek])

data_predict = temp_data

What I don't understand is how to predict the new output Y for a given X. I have come across many different code solutions, but none of them seem to work for me: either because they can't, or because I am not familiar enough with the TensorFlow syntax (I believe it is the latter). *Note: I have tried different variations of the tf.run() and tf.equal() methods, but I am always missing some required arguments.

1 Answer:

Answer 0 (score: 1)

To do this, you should feed the test data to the trained network; the network's output will be your predicted labels.

在"#Prediction"之后,您已经在培训过程中每50步完成一次。对于测试数据再次执行此操作,如下所示:

for i in range(0, len(data_predict) // batch_size):
    start = i * batch_size
    batch_x = data_predict[start:start + batch_size] 
    pred = net.run(out, feed_dict={X: batch_x}) 

Now pred is a NumPy array of shape [1, batch_size] (because of the transpose applied to out), containing the predicted labels for that batch of your new data.
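One caveat, and a likely source of the confusion: the snippet above feeds the raw data_predict rows straight into the network, but the network was trained on MinMaxScaler-transformed inputs, and what it outputs is the scaled kwh value rather than kWh in the original units. Below is a minimal sketch of one way to handle both steps at once; it assumes the scaler, net, X and out objects from the training script are still in scope and that every column (including time) is numeric:

# Scale the new rows with the SAME scaler used for training.
# The scaler was fit on 9 columns (kwh first), so prepend a dummy kwh column.
new_rows = np.asarray(data_predict, dtype=np.float32)        # shape [n, 8]
dummy_kwh = np.zeros((len(new_rows), 1), dtype=np.float32)   # placeholder for the target column
scaled = scaler.transform(np.hstack([dummy_kwh, new_rows]))  # same transform as the training data
X_new = scaled[:, 1:]                                        # drop the dummy target again

# Run the trained network on the scaled features
pred_scaled = net.run(out, feed_dict={X: X_new})             # shape [1, n] because of the transpose

# Map the predictions back to kWh in the original units:
# put them into the kwh slot of a 9-column array and inverse-transform it
restored = np.hstack([pred_scaled.T, X_new])                 # shape [n, 9]
pred_kwh = scaler.inverse_transform(restored)[:, 0]
print(pred_kwh)

Running the whole array through one net.run call also avoids the // batch_size loop silently dropping the last, incomplete batch.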