I'm trying to build an LSTM model with Keras. The training data has dimensions [7165, 27], and with my current setup it throws the following error:
File "C:\Users\Eier\Anaconda3\lib\site-packages\keras\models.py", line 441, in __init__
self.add(layer)
File "C:\Users\Eier\Anaconda3\lib\site-packages\keras\models.py", line 497, in add
layer(x)
File "C:\Users\Eier\Anaconda3\lib\site-packages\keras\layers\recurrent.py", line 500, in __call__
return super(RNN, self).__call__(inputs, **kwargs)
File "C:\Users\Eier\Anaconda3\lib\site-packages\keras\engine\topology.py", line 575, in __call__
self.assert_input_compatibility(inputs)
File "C:\Users\Eier\Anaconda3\lib\site-packages\keras\engine\topology.py", line 474, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer lstm_64: expected ndim=3, found ndim=4
I know this error is fairly common, but none of the many solutions I've found online has worked for me. I've already tried reshaping the training data into a 3D matrix, messing around with different layer combinations, explicitly stating the batch size, using Flatten(), and so on, all to no avail. I'd appreciate it if someone could nudge me in the right direction on this.
Code snippet:
input_dim = 27
units = 5
timesteps = 1
samples = X_train.shape[0]

X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))

model = Sequential([
    LSTM(units, return_sequences=True, stateful=True, input_shape=(samples, timesteps, input_dim)),
    Dropout(0.2),
    LSTM(units, return_sequences=False),
    Dropout(0.2),
    Dense(1),
    Activation('softmax'),
])

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
model.fit(X_train, y_train, batch_size=32, epochs=60)
Answer 0 (score: 0)
As @ShubhamPanchal pointed out in the comments, you don't need to specify the samples dimension. An LSTM layer expects input of shape [batch_size, time_steps, channels], so when you pass the input_shape argument you must pass a tuple specifying only the time_steps and channels dimensions:
LSTM(32, return_sequences=True, stateful=True, input_shape=(time_steps, input_dim))
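The reshape in the question already produces the right 3-D layout; only the input_shape argument was off. A minimal sketch with random filler data (shapes mirror the question) showing that the reshaped array has ndim=3, which is what the LSTM layer checks for:

```python
import numpy as np

# Training matrix with the question's dimensions [7165, 27]
X_train = np.random.rand(7165, 27)

# Insert a time axis of length 1: [samples, timesteps, features]
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))

print(X_train.ndim)   # → 3
print(X_train.shape)  # → (7165, 1, 27)
```

The matching input_shape for the first LSTM layer is then (1, 27): the samples axis is supplied implicitly by the batch, never by input_shape.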
Since you are using a stateful LSTM, you also need to specify the batch_size argument. The complete code for the model would then be:
model = Sequential([
    LSTM(units, return_sequences=True, stateful=True, input_shape=(timesteps, input_dim), batch_size=batch_size),
    Dropout(0.2),
    LSTM(units, return_sequences=False),
    Dropout(0.2),
    Dense(1),
    Activation('softmax'),
])
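One extra caveat worth flagging: with stateful=True, Keras fixes the batch size, and model.fit() then requires the number of samples to divide evenly by batch_size. With the question's 7165 samples, batch_size=32 would be rejected. A hypothetical helper (not part of the original code) to list the batch sizes that would work:

```python
def valid_batch_sizes(n_samples):
    """Return every batch size that divides n_samples exactly,
    as required when fitting a stateful RNN in Keras."""
    return [b for b in range(1, n_samples + 1) if n_samples % b == 0]

print(valid_batch_sizes(7165))  # → [1, 5, 1433, 7165]
```

Since 7165 = 5 × 1433 and 1433 is prime, the only exact options are 1, 5, 1433, or 7165; otherwise you would need to trim or pad the dataset, or drop stateful=True if you don't actually need state carried across batches.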