Keras LSTM functional API with multiple inputs

Asked: 2019-07-09 05:45:44

Tags: tensorflow keras lstm multiple-input

I am trying to train an LSTM model with two inputs, price and sentiment. After normalizing the two datasets into trainX and trainS, I followed the Keras documentation to train the model:

print(trainX.shape)
print(trainS.shape)
(22234, 1, 51)  # 51 because these datasets are time sequences, and I look back over 51 hours of historical price data
(22285, 1)

The code basically follows the Keras multiple-input guide: https://keras.io/getting-started/functional-api-guide/#all-models-are-callable-just-like-layers but I get an error when I fit the model:

Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ..., 0., 0., 0.]],

       ...,

       [[0., 0., 0., ..., 0., 0., 0.]],

       [[0., 0., 0., ....

from keras.layers import Input, Embedding, LSTM, Dense
from keras.models import Model

# Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
# Note that we can name any layer by passing it a "name" argument.
main_input = Input(shape=(trainX.shape[0],), dtype='int32', name='main_input')

# This embedding layer will encode the input sequence
# into a sequence of dense 512-dimensional vectors.
x = Embedding(output_dim=512, input_dim=10000, input_length=trainX.shape[0])(main_input)

# A LSTM will transform the vector sequence into a single vector,
# containing information about the entire sequence
lstm_out = LSTM(32)(x)
auxiliary_output = Dense(2, activation='sigmoid', name='aux_output')(lstm_out)

import keras
auxiliary_input = Input(shape=(trainS.shape[0],), name='aux_input')
x = keras.layers.concatenate([lstm_out, auxiliary_input])

# We stack a deep densely-connected network on top
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)

# And finally we add the main logistic regression layer
main_output = Dense(2, activation='sigmoid', name='main_output')(x)

model = Model(inputs=[main_input, auxiliary_input], outputs=[main_output, auxiliary_output])

model.compile(optimizer='rmsprop', loss='binary_crossentropy',
              loss_weights=[1., 0.2])
model.fit(trainX, trainS, epochs=100, batch_size=1, verbose=2, shuffle=False)

1 Answer:

Answer 0 (score: 0)

The model fit call must be passed lists of np.arrays; the arrays must share the same batch size, and their remaining dimensions must match the shapes defined for the inputs/targets.

That is, you need to call

model.fit([input0, input1], [output0, output1])

and all of these arrays must have the same shape[0].
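Note that the two arrays in the question have different first dimensions (22234 vs 22285), which already violates this rule. A minimal sketch of one way to align them, assuming the rows actually correspond after truncation (whether that alignment is right for this data is a separate question):

```python
import numpy as np

# Dummy arrays with the mismatched batch sizes from the question
trainX = np.zeros((22234, 1, 51))
trainS = np.zeros((22285, 1))

# Truncate both to the common length so their shape[0] values match
n = min(trainX.shape[0], trainS.shape[0])
trainX, trainS = trainX[:n], trainS[:n]

print(trainX.shape[0] == trainS.shape[0])  # True
```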

I noticed the following in your code:

main_input = Input(shape=(trainX.shape[0],), ...)

This is incorrect. You want the input shape to be trainX.shape[1:]. The batch size does not need to be defined, but the other dimensions do.
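Putting both fixes together, here is a minimal sketch of a two-input model with the shapes from the question. The layer sizes and target are illustrative, and feeding the sentiment in after the LSTM (rather than through it) is a modelling choice, not the only option:

```python
import numpy as np
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

# Dummy data with the per-sample shapes from the question,
# batch sizes already aligned
trainX = np.random.rand(100, 1, 51)   # price sequences
trainS = np.random.rand(100, 1)       # sentiment
trainY = np.random.rand(100, 1)       # illustrative target

# Input shapes exclude the batch dimension: use shape[1:], not shape[0]
main_input = Input(shape=trainX.shape[1:], name='main_input')
aux_input = Input(shape=trainS.shape[1:], name='aux_input')

lstm_out = LSTM(32)(main_input)
x = concatenate([lstm_out, aux_input])
x = Dense(64, activation='relu')(x)
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

model = Model(inputs=[main_input, aux_input], outputs=main_output)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')

# Multiple inputs are passed as a list of arrays sharing the same shape[0]
model.fit([trainX, trainS], trainY, epochs=1, batch_size=32, verbose=0)
```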
