I keep running into errors like this:
ValueError: Input 0 of layer sequential is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [10, 3]
I searched around and found:
LSTM layer expects inputs to have shape of (batch_size, timesteps, input_dim)
OK, but honestly I'm still a bit confused.
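If I understand that rule correctly, it means something like this (the sizes here are made up):
import numpy as np
import tensorflow as tf

# 8 independent sequences, each 10 timesteps long,
# each timestep carrying a 3-number feature vector
batch = np.random.uniform(0, 1, (8, 10, 3)).astype("float32")

lstm = tf.keras.layers.LSTM(16, return_sequences=True)
print(lstm(batch).shape)  # (8, 10, 16): one 16-dim output per timestep per sequence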
For example, my training data looks like this:
x_train (100, 3)  # rows like [[1,2,3],[3,4,5],[5,6,7]]
y_train (100, 3)  # the answers
I want to use 10 sets of 3 numbers to predict the next set of 3 numbers after [7,8,9]; the same kind of guess as going from x_train[1..10] to y_train[11].
The code below works, but I still don't understand what the 1 in input_shape=(3, 1) means. Shouldn't it be 3 (the size I ultimately want)? And batch_size is the first dimension the LSTM expects, so when I want to predict one item from the past 10 items, is setting it to 10 here correct?
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

x_train = np.array(x).reshape(100, 3, 1)  # x is my raw (100, 3) data from above
y_train = np.array(x).reshape(100, 3, 1)

model = Sequential()
model.add(LSTM(512, activation=None, input_shape=(3, 1), return_sequences=True))
model.add(Dense(1, activation="linear"))
opt = Adam(learning_rate=0.001)
model.compile(loss='mse', optimizer=opt)
model.summary()
history = model.fit(x_train, y_train, epochs=epoch, batch_size=10)  # how do I set the batch size???
Answer 0 (score: 2)
Try the following code:
import tensorflow as tf
import numpy as np
x = np.random.uniform(0, 10, [101, 3])
x_train = np.array(x[:-1]).reshape(-1, 5, 3)  # 100 rows -> 20 sequences of 5 timesteps, 3 features each
y_train = np.array(x[1:]).reshape(-1, 5, 3)   # the same rows shifted one step ahead: the "next" values
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(512, activation=None, input_shape=(None, 3), return_sequences=True))  # (None, 3): any number of timesteps, 3 features per step
model.add(tf.keras.layers.Dense(3, activation="linear"))  # 3 output features per timestep, so the output shape matches y_train
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(loss='mse', optimizer=opt)
model.summary()
history = model.fit(x_train, y_train, epochs=10, batch_size=10)  # the batch size is set here: the 20 sequences are split into two batches of 10
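To address the original confusion directly: in Keras, input_shape never includes the batch dimension, so the 1 in your input_shape=(3, 1) was the feature dimension (one number per timestep), not the output size, and batch_size is chosen in model.fit, not in the layer. As a quick sanity check on the shapes (this assumes the model above, where input_shape=(None, 3) leaves the number of timesteps open):
# one training sequence of shape (1, 5, 3) -> one 3-number prediction per timestep
print(model.predict(x_train[:1]).shape)  # (1, 5, 3)

# because the timestep dimension is unconstrained, a longer sequence also works
longer = np.random.uniform(0, 10, (1, 12, 3))
print(model.predict(longer).shape)  # (1, 12, 3)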