Keras LSTM with embeddings of 2 words at each time step

Date: 2018-08-31 12:31:08

Tags: python keras concatenation lstm embedding

The code below builds an LSTM model. I want to modify this exact model so that it starts with an embedding layer that receives 2 different words at each time step and embeds both of them (with the same embedding layer), concatenates their embeddings, and then feeds the result into the rest of my model:

k_model = Sequential()

k_model.add(LSTM(int(document_max_num_words*1.5), input_shape=(document_max_num_words, num_features)))
k_model.add(Dropout(0.3))
k_model.add(Dense(num_categories))
k_model.add(Activation('sigmoid'))

k_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

1 Answer:

Answer 0 (score: 0)

If I understand your question correctly: assuming your input data has shape (n_samples, n_timesteps, 2) (i.e., two words per time step), you can use the TimeDistributed wrapper to achieve what you are looking for:

from keras import layers
from keras import models

n_vocab = 1000
n_timesteps = 500
embed_dim = 128
words_per_step = 2

model = models.Sequential()
model.add(layers.TimeDistributed(layers.Embedding(n_vocab, embed_dim),
                                 input_shape=(n_timesteps, words_per_step)))
model.add(layers.TimeDistributed(layers.Flatten()))
# the rest of the model

model.summary()

Model summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
time_distributed_12 (TimeDis (None, 500, 2, 128)       128000    
_________________________________________________________________
time_distributed_13 (TimeDis (None, 500, 256)          0         
=================================================================
Total params: 128,000
Trainable params: 128,000
Non-trainable params: 0
_________________________________________________________________
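To connect this back to the original model: after the TimeDistributed(Flatten()) layer, the output has shape (None, n_timesteps, 2 * embed_dim), which is exactly the (timesteps, features) shape an LSTM expects, so the remaining layers from the question can be appended directly. Below is a minimal sketch of the full pipeline; the LSTM width (64) and num_categories value are hypothetical placeholders, not values from the question:

```python
from keras import layers
from keras import models

n_vocab = 1000
n_timesteps = 500
embed_dim = 128
words_per_step = 2
num_categories = 5  # hypothetical; use your own number of output categories

model = models.Sequential()
# Embed both words at each time step with the same Embedding layer
model.add(layers.TimeDistributed(layers.Embedding(n_vocab, embed_dim),
                                 input_shape=(n_timesteps, words_per_step)))
# Flatten the two embeddings per step, i.e. concatenate them -> (None, 500, 256)
model.add(layers.TimeDistributed(layers.Flatten()))
# The rest of the model from the question, on top of the embedded sequence
model.add(layers.LSTM(64))  # hypothetical width
model.add(layers.Dropout(0.3))
model.add(layers.Dense(num_categories, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```

The Flatten inside TimeDistributed is what performs the concatenation: it merges the two embed_dim-sized vectors at each step into a single 2 * embed_dim vector, without mixing information across time steps.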