Keras not making good predictions

Date: 2017-03-14 12:32:28

Tags: python machine-learning keras prediction

Two months ago I started working with Keras in order to obtain pumping patterns to use in another piece of software.

I don't know why the patterns I obtain have nothing to do with the real ones. I have tried using several features (inputs) from the dataset, and also adding more of them, but nothing helps. The results look like this:

[Image: prediction results]

where:

  • Blue: the dataset (the real data I am trying to "approximate")
  • Orange: the prediction

The dataset is a time series.

Reference vs prediction is the csv file containing the dataset.

Here is the code:

import numpy
import matplotlib.pyplot as plt
import pandas
import math

from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from keras.regularizers import l2


def create_dataset(dataset, look_back=1):
    # Frame the series for supervised learning: X = the four input columns
    # over a window of look_back rows, Y = the output column (index 4) at
    # the following time step.
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0:4]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 4])
    return numpy.array(dataX), numpy.array(dataY)



# fix random seed for reproducibility
seed=7
numpy.random.seed(seed)


# load dataset
dataframe = pandas.read_csv('datos_horarios.csv', engine='python') 
dataset = dataframe.values


# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)

# split data into training and test sets
train_size = int(len(dataset) * 0.67) 
test_size = len(dataset) - train_size
train, test = dataset[0:train_size, :], dataset[train_size:len(dataset), :]

# reshape into X=t and Y=t+1
look_back = 1
trainX, trainY = create_dataset(train, look_back)  
testX, testY = create_dataset(test, look_back)

# reshape inputs to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], look_back, 4))
testX = numpy.reshape(testX, (testX.shape[0], look_back, 4))
# create and fit the LSTM network

model = Sequential()
model.add(Dropout(0.3, input_shape=(look_back,4))) 
model.add(LSTM(6, input_shape=(look_back,4), W_regularizer=l2(0.001))) 
model.add(Dense(10)) 
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')  # 'momentum' is not a compile() argument; it belongs to optimizers such as SGD
history = model.fit(trainX, trainY, validation_split=0.33, nb_epoch=250, batch_size=32)


# Plot
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epochs')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()

# make predictions
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
print(trainPredict)

numero_inputs = 4
inp = numero_inputs - 1
# Build an array with as many columns as the dataset (5)
trainPredict_extended = numpy.zeros((len(trainPredict), numero_inputs + 1))
# Put the predictions in the output column
trainPredict_extended[:, inp + 1] = trainPredict[:, 0]
# Inverse transform it and select the output column (index 4)
trainPredict = scaler.inverse_transform(trainPredict_extended)[:, inp + 1]

# Build an array with as many columns as the dataset (5)
testPredict_extended = numpy.zeros((len(testPredict), numero_inputs + 1))
# Put the predictions in the output column
testPredict_extended[:, inp + 1] = testPredict[:, 0]
# Inverse transform it and select the output column (index 4)
testPredict = scaler.inverse_transform(testPredict_extended)[:, inp + 1]

trainY_extended = numpy.zeros((len(trainY), numero_inputs + 1))
trainY_extended[:, inp + 1] = trainY
trainY = scaler.inverse_transform(trainY_extended)[:, inp + 1]

testY_extended = numpy.zeros((len(testY), numero_inputs + 1))
testY_extended[:, inp + 1] = testY
testY = scaler.inverse_transform(testY_extended)[:, inp + 1]
# Calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY, trainPredict))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY, testPredict))
print('Test Score: %.2f RMSE' % (testScore))
# add train predictions to the plot

trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, 0] = trainPredict

# add test predictions to the plot
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, 0] = testPredict

# Plot real data and training and test predictions
serie, = plt.plot(scaler.inverse_transform(dataset)[:, numero_inputs])  # map samples back from (0, 1) to real values and plot them
entrenamiento, = plt.plot(trainPredictPlot[:, 0], linestyle='--')  # plot the training predictions
prediccion_test, = plt.plot(testPredictPlot[:, 0], linestyle='--')  # plot the test predictions
plt.ylabel(' (m3)')
plt.xlabel('h')
plt.legend([serie,entrenamiento,prediccion_test],['Time series','Training','Prediction'], loc='upper right')
plt.show()
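
A side note on the zero-padding used above: it exists only because scaler was fitted on all five columns, so inverse_transform expects a five-column array. Below is a minimal sketch of an equivalent single-column inversion using MinMaxScaler's fitted per-column attributes (the names predicted_scaled and predicted_real are illustrative, not from the original code):

# Sketch: invert the scaling of one column without zero-padding.
# MinMaxScaler with feature_range=(0, 1) stores per-column minima and
# maxima after fitting (data_min_, data_max_), so a scaled column can
# be mapped back directly.
col = 4                                   # index of the output column
predicted_scaled = model.predict(trainX)  # shape (n_samples, 1), values in [0, 1]
predicted_real = (predicted_scaled[:, 0]
                  * (scaler.data_max_[col] - scaler.data_min_[col])
                  + scaler.data_min_[col])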

Any ideas on how to solve this? Or, at least, on what the problem is?

Input columns:

  1. Time of day (in half-hour intervals), converted to decimal (see the sketch after this list).
  2. Day of the week (1-Monday ... 7-Sunday)
  3. Month of the year (1-12)
  4. Day of the month (1-31)

Output:

  • Pumped flow (m3)
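
As an aside, here is a minimal sketch of the conversion described in column 1, assuming the raw times come as 'HH:MM' strings on a half-hour grid (the actual format stored in datos_horarios.csv is an assumption):

# Hypothetical helper: convert an 'HH:MM' half-hour timestamp to a
# decimal hour, e.g. '13:30' -> 13.5. Adapt the parsing to however the
# times are actually stored in datos_horarios.csv.
def time_to_decimal(hhmm):
    hours, minutes = hhmm.split(':')
    return int(hours) + int(minutes) / 60.0

print(time_to_decimal('13:30'))  # 13.5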
EDIT: Using @a_guest's code, and changing some parameters such as the number of epochs or the history value, the results are really good:

[Image: Relative deviation]

1 answer:

Answer 0 (score: 3):

Not exactly an answer, but I share here the code with which I obtained the following results:

[Image: Predictions]

[Image: Relative deviation]

Note that the network parameters were chosen arbitrarily, i.e. they have not been optimized. That means you can most likely get better results by varying them. Also, varying the value of history (your look_back) probably has a big effect on the quality of the predictions.
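
To make that concrete, below is a minimal sketch of such a sweep over the window length; the candidate windows, the epoch count and the reduced one-hidden-layer network are arbitrary illustration choices, not tuned settings:

import numpy
from keras.models import Sequential
from keras.layers import Dense

def score_window(history, series):
    # Re-frame the series for this window and train a small network;
    # the final validation loss serves as a crude quality measure.
    windows = numpy.array([series[i:i + history + 1]
                           for i in range(len(series) - history - 1)])
    X, y = windows[:, :-1], windows[:, -1]
    model = Sequential()
    model.add(Dense(history, input_dim=history, activation='sigmoid'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(optimizer='rmsprop', loss='mean_squared_error')
    fit = model.fit(X, y, validation_split=0.2, nb_epoch=50,
                    batch_size=10, verbose=0)
    return fit.history['val_loss'][-1]

series = numpy.loadtxt('datos_horarios.csv', delimiter=',')[:, -1]
series /= numpy.max(series)  # same normalization as in the code below
for window in (24, 48, 96, 192):
    print('history=%d -> val_loss=%.4f' % (window, score_window(window, series)))

Each trial trains a fresh model from scratch, so the comparison is crude, but it is usually enough to spot a trend.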

from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
import numpy

numpy.random.seed(12)

history = 96


def generate_data():
    data = numpy.loadtxt('datos_horarios.csv', delimiter=',', dtype=float)
    # Normalize data.
    data[:, -1] /= numpy.max(data[:, -1])
    train_test_data = []
    for i in range(data.shape[0] - history - 1):
        # Include the reference value here, will be extracted later.
        train_test_data.append(data[i:i+history+1, -1].flatten())
    return numpy.array(train_test_data)

train_test_data = generate_data()
# Shuffle data set in order to randomly select training and test data.
numpy.random.shuffle(train_test_data)

n_samples = train_test_data.shape[0]
n_train_samples = int(0.8 * n_samples)

train_data = train_test_data[:n_train_samples, :-1]
train_data_reference = train_test_data[:n_train_samples, -1][:, None]

test_data = train_test_data[n_train_samples:, :-1]
test_data_reference = train_test_data[n_train_samples:, -1]

print('Training data: ', train_data)
print('Reference values: ', train_data_reference)


model = Sequential()
model.add(Dense(history, input_dim=history, activation='sigmoid'))
model.add(Dense(history // 2, activation='sigmoid'))  # integer division keeps the layer size an int under Python 3
model.add(Dense(history // 4, activation='sigmoid'))
model.add(Dense(history // 8, activation='sigmoid'))
model.add(Dense(history // 16, activation='sigmoid'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='mean_squared_error', metrics=['accuracy'])
model.summary()

model.fit(train_data, train_data_reference, shuffle=True, nb_epoch=200, batch_size=10)

# Use the complete data set to see the network performance.
# Regenerate data set because it was shuffled before.
train_test_data = generate_data()
test_data_predicted = model.predict(train_test_data[:, :-1]).flatten()
test_data_reference = train_test_data[:, -1]

relative_deviation = test_data_predicted / test_data_reference - 1.0
print('Relative deviation: ', relative_deviation)

plt.figure()
plt.plot(range(len(test_data_reference)), test_data_reference, 'b-', label='reference')
plt.plot(range(len(test_data_predicted)), test_data_predicted, 'r--', label='predicted')
plt.xlabel('test case #')
plt.ylabel('predictions')
plt.title('Reference values vs predicted values')
plt.legend()

plt.figure()
plt.plot(range(len(test_data_predicted)), relative_deviation, 'bx', label='relative deviation')
plt.xlabel('test case #')
plt.ylabel('relative deviation')
plt.title('Relative deviation of predicted values (predicted / reference - 1)')
plt.legend()

plt.show()