I want to create a Keras model consisting of an embedding layer, followed by two LSTMs with dropout 0.5, and finally a dense layer with a softmax activation.
The first LSTM should propagate its sequential output to the second one; in the second layer, I am only interested in the hidden state of the LSTM after it has processed the whole sequence.
I tried the following:
sentence_indices = Input(input_shape, dtype = 'int32')
embedding_layer = pretrained_embedding_layer(word_to_vec_map, word_to_index)
embeddings = embedding_layer(sentence_indices)
# Propagate the embeddings through an LSTM layer with 128-dimensional hidden state
X = LSTM(128, return_sequences=True, dropout = 0.5)(embeddings)
# Propagate X trough another LSTM layer with 128-dimensional hidden state
X = LSTM(128, return_sequences=False, return_state=True, dropout = 0.5)(X)
# Propagate X through a Dense layer with softmax activation to get back a batch of 5-dimensional vectors.
X = Dense(5, activation='softmax')(X)
# Create Model instance which converts sentence_indices into X.
model = Model(inputs=[sentence_indices], outputs=[X])
But I get the following error:
ValueError: Layer dense_5 expects 1 inputs, but it received 3 input tensors. Input received: [<tf.Tensor 'lstm_10/TensorArrayReadV3:0' shape=(?, 128) dtype=float32>, <tf.Tensor 'lstm_10/while/Exit_2:0' shape=(?, 128) dtype=float32>, <tf.Tensor 'lstm_10/while/Exit_3:0' shape=(?, 128) dtype=float32>]
Apparently the LSTM is not returning output of the shape I expect. How can I fix this?
Answer 0 (score: 1)
If you set return_state=True, the LSTM returns three things: the output, the last hidden state, and the last cell state.
Do LSTM(...)(X)
instead of X = LSTM(128, return_sequences=False, return_state=True, dropout = 0.5)(X), i.e. drop the return_state=True argument, since you only need the layer's output here.
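Alternatively, if you do want access to the final states, you can keep return_state=True and unpack all three tensors, feeding only the output to the Dense layer. A minimal sketch (the input length, vocabulary size, and Embedding layer below are placeholder assumptions standing in for the asker's pretrained_embedding_layer):

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

# Assumed toy dimensions for illustration only
sentence_indices = Input(shape=(10,), dtype='int32')
embeddings = Embedding(input_dim=1000, output_dim=50)(sentence_indices)

X = LSTM(128, return_sequences=True, dropout=0.5)(embeddings)
# With return_state=True the layer returns three tensors:
# the output, the last hidden state, and the last cell state.
X, last_h, last_c = LSTM(128, return_state=True, dropout=0.5)(X)
# Pass only the output tensor to the Dense layer
out = Dense(5, activation='softmax')(X)

model = Model(inputs=sentence_indices, outputs=out)
```

This way the Dense layer receives a single (batch, 128) tensor, and last_h / last_c remain available if the states are needed elsewhere.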
See here for an example.
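Putting it together, a minimal end-to-end sketch of the corrected model (the sequence length, vocabulary size, and Embedding layer are assumed stand-ins for the asker's pretrained embedding):

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

max_len = 10  # assumed sequence length
sentence_indices = Input(shape=(max_len,), dtype='int32')
# Stand-in for pretrained_embedding_layer(word_to_vec_map, word_to_index)
embeddings = Embedding(input_dim=1000, output_dim=50)(sentence_indices)

# First LSTM propagates the full sequence to the next layer
X = LSTM(128, return_sequences=True, dropout=0.5)(embeddings)
# Second LSTM: no return_state=True, so it returns a single (batch, 128) tensor
X = LSTM(128, return_sequences=False, dropout=0.5)(X)
X = Dense(5, activation='softmax')(X)

model = Model(inputs=sentence_indices, outputs=X)
```

With return_state omitted, the Dense layer receives exactly one input tensor and the ValueError goes away.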