I'm trying to figure out the correct syntax for the model I want to fit. It's a time-series prediction problem, and I'd like to use a few dense layers to improve the representation of the time series before feeding it into the LSTM.

Here's the dummy series I'm working with:
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
import keras as K
import tensorflow as tf
d = pd.DataFrame(data={"x": np.linspace(0, 100, 1000)})
d['l1_x'] = d.x.shift(1)   # first lag of x
d['l2_x'] = d.x.shift(2)   # second lag of x
d.fillna(0, inplace=True)  # zero-fill the leading NaNs created by the shifts
d["y"] = np.sin(.1*d.x*np.sin(d.l1_x))*np.sin(d.l2_x)  # nonlinear target in x and its lags
plt.plot(d.x, d.y)
First, I'll fit an LSTM without any dense layers. This requires reshaping the data:
X = d[["x", "l1_x", "l2_x"]].values.reshape(len(d), 3, 1)  # (samples, timesteps, features)
y = d.y.values
Is this correct?

The tutorials suggest that a single time series should have 1 in the first dimension, then the number of time steps (1000), then the number of covariates (3). But when I reshape it that way, the model fails to compile.
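For reference, Keras LSTMs take a 3D input of shape (samples, timesteps, features); here is a shape-only sketch contrasting the two layouts I've tried, using the frame d from above:

# Layout used above: 1000 samples, each a 3-step sequence with 1 feature.
X_a = d[["x", "l1_x", "l2_x"]].values.reshape(len(d), 3, 1)  # (1000, 3, 1)
# Tutorial-style layout: 1 sample of 1000 time steps with 3 features.
# Keras then sees a single sample, so y would also have to collapse to a
# single target, which is presumably why fitting failed with this shape.
X_b = d[["x", "l1_x", "l2_x"]].values.reshape(1, len(d), 3)  # (1, 1000, 3)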
Here I compile and train the model:
model = K.Sequential()
model.add(K.layers.LSTM(10, input_shape=(X.shape[1], X.shape[2]), batch_size = 1, stateful=True))
model.add(K.layers.Dense(1))
callbacks = [K.callbacks.EarlyStopping(monitor='loss', min_delta=0, patience=5, verbose=1,
                                       mode='auto', baseline=None, restore_best_weights=True)]
model.compile(loss='mean_squared_error', optimizer='rmsprop')
model.fit(X, y, epochs=50, batch_size=1, verbose=1, shuffle=False, callbacks = callbacks)
model.reset_states()
yhat = model.predict(X, 1)
plt.clf()
plt.plot(d.x, d.y)
plt.plot(d.x, yhat)
Why can't I get the model to overfit? Is it because I've reshaped the data incorrectly? The fit doesn't really get any better when I use more nodes in the LSTM.

(I'm also not clear on what "stateful" means. Neural networks are just nonlinear models. Which parameters are the "state", and why would you reset them?)
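As far as I can tell, the "state" of a stateful LSTM is its hidden and cell activations rather than any trainable weights; with stateful=True they persist across consecutive batches until reset. A minimal sketch of the effect, reusing the imports above (the layer sizes are arbitrary):

m = K.Sequential()
m.add(K.layers.LSTM(4, batch_input_shape=(1, 3, 1), stateful=True))
m.compile(loss='mse', optimizer='rmsprop')
x_demo = np.random.rand(1, 3, 1)
p1 = m.predict(x_demo)  # starts from a zero state and updates it
p2 = m.predict(x_demo)  # same input, different output: the state carried over
m.reset_states()        # zero the hidden/cell state again
p3 = m.predict(x_demo)  # matches p1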
How do I insert dense layers between the input and the LSTM?

Finally, I'd like to add a bunch of dense layers to perform a basis expansion on x before it gets to the LSTM. But an LSTM wants a 3D array, and a dense layer spits out a matrix. What do I do here? This doesn't work:
model = K.Sequential()
model.add(K.layers.Dense(10, activation = "relu", input_dim = 3))
model.add(K.layers.LSTM(3, input_shape=(10, X.shape[2]), batch_size = 1, stateful=True))
model.add(K.layers.Dense(1))
ValueError: Input 0 is incompatible with layer lstm_2: expected ndim=3, found ndim=2
Answer (score: 2)
On the first question: I'm doing the same thing and I don't get any error; please share your error trace.

Note: I'll give the examples using the functional API, which allows a little more freedom (personal opinion).
from keras.layers import Dense, Flatten, LSTM, Activation
from keras.layers import Dropout, RepeatVector, TimeDistributed
from keras import Input, Model
seq_length = 15
input_dims = 10
output_dims = 8
n_hidden = 10
model1_inputs = Input(shape=(seq_length, input_dims))
net1 = LSTM(n_hidden, return_sequences=True)(model1_inputs)  # output at every time step
net1 = LSTM(n_hidden, return_sequences=False)(net1)          # output at the last step only
model1_outputs = Dense(output_dims, activation='relu')(net1)
model1 = Model(inputs=model1_inputs, outputs = model1_outputs, name='model1')
## Fit the model
model1.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_11 (InputLayer)        (None, 15, 10)            0
_________________________________________________________________
lstm_8 (LSTM)                (None, 15, 10)            840
_________________________________________________________________
lstm_9 (LSTM)                (None, 10)                840
_________________________________________________________________
dense_9 (Dense)              (None, 8)                 88
_________________________________________________________________
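A quick smoke test of model1 on random data with the shapes declared above (the loss and optimizer here are placeholders, not recommendations):

import numpy as np
X_demo = np.random.rand(32, seq_length, input_dims)  # (batch, seq_length, input_dims)
y_demo = np.random.rand(32, output_dims)
model1.compile(loss='mse', optimizer='adam')
model1.fit(X_demo, y_demo, epochs=2, batch_size=8)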
For the second question, there are two approaches:

1. If your data is not yet a sequence, i.e. has shape (batch, input_dims), you can use RepeatVector, which repeats the same vector n_steps times; those steps are nothing but the rolling steps the LSTM unrolls over.
seq_length = 15
input_dims = 16
output_dims = 8
n_hidden = 20
lstm_dims = 10
model1_inputs = Input(shape=(input_dims,))
net1 = Dense(n_hidden)(model1_inputs)
net1 = Dense(n_hidden)(net1)
net1 = RepeatVector(3)(net1)                         # tile the vector into 3 time steps
net1 = LSTM(lstm_dims, return_sequences=True)(net1)
net1 = LSTM(lstm_dims, return_sequences=False)(net1)
model1_outputs = Dense(output_dims, activation='relu')(net1)
model1 = Model(inputs=model1_inputs, outputs = model1_outputs, name='model1')
## Fit the model
model1.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_13 (InputLayer)        (None, 16)                0
_________________________________________________________________
dense_13 (Dense)             (None, 20)                340
_________________________________________________________________
dense_14 (Dense)             (None, 20)                420
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 3, 20)             0
_________________________________________________________________
lstm_14 (LSTM)               (None, 3, 10)             1240
_________________________________________________________________
lstm_15 (LSTM)               (None, 10)                840
_________________________________________________________________
dense_15 (Dense)             (None, 8)                 88
=================================================================
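To make RepeatVector concrete: it simply tiles its 2D input along a new time axis, so each of the 3 steps the LSTM receives is an identical copy of the same vector. A tiny check, reusing the imports and constants above:

import numpy as np
rep_in = Input(shape=(n_hidden,))
rep_model = Model(rep_in, RepeatVector(3)(rep_in))
v = np.arange(n_hidden, dtype='float32').reshape(1, n_hidden)
out = rep_model.predict(v)
print(out.shape)                          # (1, 3, 20)
print(np.allclose(out[0, 0], out[0, 2]))  # True: the copies are identical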
2. If your data is already a sequence, i.e. has shape (seq_len, input_dims), you can use TimeDistributed, which applies a Dense layer with the same weights across the entire sequence.
seq_length = 15
input_dims = 10
output_dims = 8
n_hidden = 10
lstm_dims = 6
model1_inputs = Input(shape=(seq_length, input_dims))
net1 = TimeDistributed(Dense(n_hidden))(model1_inputs)  # same Dense weights at every step
net1 = LSTM(output_dims, return_sequences=True)(net1)
net1 = LSTM(output_dims, return_sequences=False)(net1)
model1_outputs = Dense(output_dims, activation='relu')(net1)
model1 = Model(inputs=model1_inputs, outputs = model1_outputs, name='model1')
## Fit the model
model1.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_17 (InputLayer)        (None, 15, 10)            0
_________________________________________________________________
time_distributed_3 (TimeDist (None, 15, 10)            110
_________________________________________________________________
lstm_18 (LSTM)               (None, 15, 8)             608
_________________________________________________________________
lstm_19 (LSTM)               (None, 8)                 544
_________________________________________________________________
dense_19 (Dense)             (None, 8)                 72
=================================================================
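One way to see the weight sharing in TimeDistributed is the parameter count in the summary above: the wrapped Dense(10) on 10 input features contributes 10*10 + 10 = 110 parameters no matter how long the sequence is, because the same kernel and bias are reused at every step. A quick check with the constants above:

td_in = Input(shape=(seq_length, input_dims))
td_model = Model(td_in, TimeDistributed(Dense(n_hidden))(td_in))
print(td_model.count_params())  # 110, independent of seq_length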
Note: here I stacked two LSTM layers. The first uses return_sequences=True, so it returns an output at every time step; the second layer consumes that sequence and returns an output only at the last time step.
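A shape-only sketch of that difference, reusing the imports above:

seq_in = Input(shape=(seq_length, input_dims))
per_step = LSTM(8, return_sequences=True)(seq_in)      # (None, 15, 8): output at every step
last_only = LSTM(8, return_sequences=False)(per_step)  # (None, 8): output at the last step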