I'm currently working on a financial time-series LSTM model with Keras and have run into this problem.
My code seems to produce a 2-dimensional array where 3 dimensions are expected. Here is the code:
import pandas as pd
import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Input, Dense, GRU, Embedding, LSTM, Flatten
from tensorflow.python.keras.optimizers import RMSprop
from tensorflow.python.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau
batch_size = 500
feature_no = 13
period_no = 8640
def gen(batch_size, periods):
    j = 0
    features = ['ask_close',
                'ask_open',
                'ask_high',
                'ask_low',
                'bid_close',
                'bid_open',
                'bid_high',
                'bid_low',
                'open',
                'high',
                'low',
                'close',
                'price']
    with pd.HDFStore('datasets/eurusd.h5') as store:
        df = store['train_buy']
    x_shape = (batch_size, periods, len(features))
    x_batch = np.zeros(shape=x_shape, dtype=np.float16)
    y_shape = (batch_size, periods)
    y_batch = np.zeros(shape=y_shape, dtype=np.float16)
    while True:
        i = 0
        while len(x_batch) < batch_size:
            if df.iloc[j+periods]['direction'].values == 1:
                x_batch[i] = df.iloc[j:j+periods][features].values.tolist()
                y_batch[i] = df.iloc[j+periods]['target_buy'][0].round(4)
                i += 1
            j += 1
            if j == 56241737 - periods:
                j = 0
        yield x_batch, y_batch
generator = gen(batch_size, period_no)
model = Sequential()
model.add(LSTM(units = 1, return_sequences=True, input_shape = (None, feature_no,)))
optimizer = RMSprop(lr=1e-3)
model.compile(loss = 'mse', optimizer = optimizer)
model.fit_generator(generator=generator, epochs = 10, steps_per_epoch = 112483)
Here is the error:
Traceback (most recent call last):
model.fit_generator(generator=generator, epochs = 10, steps_per_epoch = 112483)
File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\models.py", line 1198, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 2345, in fit_generator
x, y, sample_weight=sample_weight, class_weight=class_weight)
File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 1981, in train_on_batch
check_batch_axis=True)
File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 1514, in _standardize_user_data
exception_prefix='target')
File "C:\Users\Seok\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\engine\training.py", line 139, in _standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking target: expected lstm_1 to have 3 dimensions, but got array with shape (500, 8640)
I've seen similar issues on GitHub that appear to have been resolved, but the solutions there don't seem to fix this problem.
Answer 0 (score: 1)
LSTM (and GRU) layers expect three-dimensional input: batch size, number of time steps, and number of features. In the input_shape argument these are given as (batch size, time steps, no. of features). So just looking at your code, you would change

model.add(LSTM(units = 1, return_sequences=True, input_shape = (None, feature_no,)))

to

model.add(LSTM(units = 1, return_sequences=True, input_shape = (batch_size, periods, len(features))))

Edit: my mistake, input_shape is not specified as three dimensions, but the model does need a three-dimensional array as input. I think the error here is actually caused by the output shape. With return_sequences=True, the LSTM's output has shape (batch_size, timesteps, units), so the generator should produce y_batch with shape (batch_size, periods, 1).
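In practice this fix amounts to giving the targets a trailing axis of size 1. A minimal sketch, assuming the rest of the generator stays exactly as posted:

import numpy as np

batch_size, periods = 500, 8640

# allocate the target batch as (batch_size, periods, 1) instead of (batch_size, periods)
y_batch = np.zeros(shape=(batch_size, periods, 1), dtype=np.float16)

# equivalently, an existing 2-D target batch can be expanded on the fly
y_2d = np.zeros((batch_size, periods), dtype=np.float16)
y_3d = y_2d[..., np.newaxis]   # shape (500, 8640, 1), matching the LSTM output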
Answer 1 (score: 0)
Found the solution. First, as platinum95 said, the return_sequences option on an LSTM layer should only be used when its output is passed on to another LSTM layer.
In addition, the shape of y_batch in the generator was wrong, as noted above; it should have shape (batch_size).
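For comparison, here is a minimal sketch of what that fix looks like end to end. The unit count, batch size, and sequence length below are illustrative placeholders, not the asker's final values; only the two changes the answer names (dropping return_sequences and using one scalar target per sequence) are taken from the answer itself.

import numpy as np
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import LSTM
from tensorflow.python.keras.optimizers import RMSprop

feature_no = 13

model = Sequential()
# return_sequences left at its default (False): the LSTM returns only its
# last hidden state, so the output has shape (batch_size, units)
model.add(LSTM(units=1, input_shape=(None, feature_no)))
model.compile(loss='mse', optimizer=RMSprop(lr=1e-3))

# small illustrative batch: 4 sequences of 16 time steps, one scalar target each
x_batch = np.zeros((4, 16, feature_no), dtype=np.float32)
y_batch = np.zeros((4,), dtype=np.float32)   # shape (batch_size,)
model.train_on_batch(x_batch, y_batch)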