Keras RNN model produces a flat line instead of the rest of my sine wave

Time: 2019-02-27 14:22:54

Tags: python machine-learning keras recurrent-neural-network

I'm trying to implement a stateful RNN in Keras that models a sine wave between 1 and -1 (so I don't have to worry about normalization). I'm just playing around with Keras, so there may be leftover code that isn't needed, simply because I've been trying different things. I thought I had it working with a window of 30 steps, but I modified the code so I could change the window size and see whether it has any effect on the prediction. After changing the model to be stateful with a variable prediction length, I just get a flat line unless I run it for upwards of 70 epochs.

Are my model and data shapes correct, or am I missing something?

I had to copy and paste this from Jupyter Notebook cells, so I know the formatting isn't ideal.

This is the graph for a window of n = 100 steps, predicting after 70 epochs.

Blue line is test data and orange is the prediction

from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from keras.callbacks import EarlyStopping
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
%matplotlib qt

x = np.linspace(0, 10, 1000, endpoint=True)#[np.newaxis]
y = np.sin(x)#[np.newaxis]
plt.plot(y)

#Automated window creation: build n lagged copies of the series, each shifted one step from the last
n=100
r=n                       # keep the original window size; n itself gets decremented in the loop
b = list()
c = list()
window_length = len(y)
headdings = list()
window_steps = list(range(1, n+1))
for val in window_steps:
    head_string='x-'+ str(val)
    headdings.append(head_string)
    b.append(y[val-1:-n])  #Forward window: series shifted by val-1 steps, length len(y)-r
    #c.append(y[val:window_length-n])
    n=n-1
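
Just to illustrate what that loop builds (a standalone toy sketch with a short series, not the original data): every entry of b is the series shifted forward by one step, and they all have length len(y) - n.

import numpy as np

# Toy version of the windowing loop above, with a 3-step window over a short series.
y_toy = np.arange(10)            # stands in for the sine samples
n_toy = 3
k = n_toy
lags = []
for val in range(1, n_toy + 1):
    lags.append(y_toy[val - 1:-k])   # same slicing pattern as b.append(y[val-1:-n])
    k -= 1

print([lag.tolist() for lag in lags])
# [[0, 1, 2, 3, 4, 5, 6], [1, 2, 3, 4, 5, 6, 7], [2, 3, 4, 5, 6, 7, 8]]
# -> each lag is shifted by one step and has len(y_toy) - n_toy == 7 samples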

test=y[r:]                                     # target series: the value that follows each window
data = pd.DataFrame(dict(zip(headdings,b)))    # windows as a DataFrame, just for inspection
data.index += 1
test.shape

y_data=np.array(test)
x_data=np.array(b)        # shape (n, samples); rotate so each row is one window
x_data=np.rot90(x_data)
x_data=np.rot90(x_data)
x_data=np.rot90(x_data)   # three CCW rotations = one clockwise rotation, giving (samples, n)
plt.plot(x_data)
plt.plot(y_data)
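
As a side note, three counter-clockwise np.rot90 calls are the same as one clockwise rotation, so x_data ends up with shape (samples, window). A tiny standalone check of that equivalence (my own sketch, not part of the model):

import numpy as np

a = np.arange(6).reshape(2, 3)             # stands in for the (n_lags, n_samples) array
three_ccw = np.rot90(np.rot90(np.rot90(a)))
one_cw = np.rot90(a, k=-1)                 # k=-1 rotates 90 degrees clockwise in one call
print(np.array_equal(three_ccw, one_cw))   # True
print(a.shape, three_ccw.shape)            # (2, 3) (3, 2): lags become columns

If I'm reading the slicing right, that leaves x_data with shape (900, 100), with the most recent lag in the first column.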

row = round(0.9 * y_data.shape[0])   # 90/10 train/test split over the 900 windows
print(row)
Y_train=y_data[:row]
X_train=x_data[:row,:]
Y_test=y_data[row:]
X_test=x_data[row:,:]
X_train=X_train.reshape(X_train.shape[0], X_train.shape[1],1)   # (samples, timesteps, 1) for the LSTM
X_test=X_test.reshape(X_test.shape[0], X_test.shape[1],1)

#Create the model
model = Sequential()
# stateful layers need a fixed batch size, here 1, via batch_input_shape
model.add(LSTM(X_train.shape[1], activation='tanh', batch_input_shape=(1,X_train.shape[1],1),return_sequences=True,stateful=True))
model.add(LSTM(X_train.shape[1],stateful=True))
#model.add(Dense(100))
#model.add(Dense(50))
#model.add(Dense(30))
#model.add(Dense(5))
model.add(Dense(1))
model.compile(optimizer='Nadam',
              loss='mse',
              metrics=['mae'])
early_stop = EarlyStopping(monitor='mae', patience=1, verbose=1)   # defined but never passed to fit below

#Train the model
model.fit(X_train, 
          Y_train,epochs=70,shuffle=True,batch_size=1)
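
As an aside, the idiom I usually see for stateful LSTMs is to fit one epoch at a time without shuffling and reset the layer states between passes, since stateful=True carries the hidden state from batch to batch. A minimal sketch of that pattern, assuming the same model and arrays as above (untested against this exact setup):

# Sketch of the usual stateful-training loop; whether it changes the flat-line
# behaviour here is an open question.
for epoch in range(70):
    model.fit(X_train, Y_train, epochs=1, batch_size=1, shuffle=False, verbose=0)
    model.reset_states()   # clear the carried-over hidden/cell state between epochs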


row=X_train.shape[1]-1
X_sample=y[-row-1:]                      # seed window: the last 100 samples of the series
X_samp=X_sample.reshape(1, row+1,1)
X_samp[:]
score = model.evaluate(X_test, Y_test,batch_size=1 )   # batch_size=1 to match the stateful batch_input_shape
score

#Prediction: closed-loop forecast, feeding each prediction back into the window
pred = model.predict(X_samp).reshape(1,1)
X_samp=X_samp[:,-row:].reshape(1,row)         # drop the oldest value
X_samp=np.concatenate((X_samp,pred),axis=1)   # append the new prediction
X_samp=X_samp.reshape(1, row+1,1)
print(pred)
window_steps = list(range(1, row+2))
for val in window_steps:
    pred = model.predict(X_samp)
    X_samp=X_samp[:,-row:].reshape(1,row)
    X_samp=np.concatenate((X_samp,pred),axis=1)
    if val != row+1:
        X_samp=X_samp.reshape(1, row+1,1)
test=y[-row-1:].reshape(row+1,1)
X_samp.reshape(row+1,1)                  # note: not assigned, so this line has no effect
plt.plot(test)
plt.plot(X_samp.T)
plt.show()
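
For comparison, the same closed-loop forecast can be written with a plain list as the rolling window (my rewrite sketch, assuming the 100-step window above); after 100 iterations the window holds only predicted values, which is what gets plotted:

# Equivalent rolling forecast kept as a list of floats (rewrite sketch, not the original).
window = list(y[-100:])                     # seed with the last known 100 samples
preds = []
for _ in range(100):
    x = np.array(window[-100:]).reshape(1, 100, 1)
    p = float(model.predict(x)[0, 0])
    preds.append(p)
    window.append(p)                        # feed each prediction back into the window

plt.plot(y[-100:], label='last known window')
plt.plot(preds, label='closed-loop prediction')
plt.legend()
plt.show()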

0 Answers:

There are no answers yet.