I've hit a problem with some Keras code. I need to process two sequential inputs with two types of embeddings, a word embedding and a doc2vec embedding, both with dim=300. I then want to concatenate the two vectors into one longer vector, because I want to extract some stacked features from them. However, the two embeddings may lie in different spaces, so I have to use Flatten() to map the two vectors into the same one. Then I need to feed the flattened output vector into an LSTM model. But the compiler complains Input 0 is incompatible with lstm_1: expected ndim=3, found ndim=2, even though I never set ndim=3 anywhere, and I don't know how to reshape the vector into a new input with the correct shape.
Please help me solve this.
n_hidden = 50
batch_size = 64
def classification_softmax(left, right):
    '''Helper function for the similarity estimate of the LSTM outputs'''
    return K.abs(left - right)

embedding_layer = Embedding(len(embeddings), 300, weights=[embeddings],
                            input_length=max_seq_length, trainable=False)
embedding_cfg_layer = Embedding(len(cfg_embedding_matrix), 300,
                                weights=[cfg_embedding_matrix],
                                input_length=1, trainable=False)
# Embed the cfg inputs, then flatten both embeddings and concatenate them
cfg_embedding_l = embedding_cfg_layer(cfg_left_input)
cfg_embedding_r = embedding_cfg_layer(cfg_right_input)
encoded_left = krs.layers.Concatenate(axis=1)(
    [krs.layers.Flatten()(embedding_layer(left_input)),
     krs.layers.Flatten()(cfg_embedding_l)])
encoded_right = krs.layers.Concatenate(axis=1)(
    [krs.layers.Flatten()(embedding_layer(right_input)),
     krs.layers.Flatten()(cfg_embedding_r)])

# Since this is a siamese network, both sides share the same LSTM
shared_lstm = LSTM(n_hidden, return_sequences=True)
# This is where the error is raised:
# Input 0 is incompatible with lstm_1: expected ndim=3, found ndim=2
left_output = shared_lstm(encoded_left)
right_output = shared_lstm(encoded_right)
...
dist = Lambda(lambda x: classification_softmax(x[0], x[1]))([left_output, right_output])
classify = Dense(5, activation=softMaxAxis1)(dist)
# Pack it all up into a model
malstm = Model([left_input, right_input,cfg_left_input,cfg_right_input], [classify])
optimizer = Adadelta(clipnorm=gradient_clipping_norm)
# malstm.compile(loss='mean_squared_error', optimizer='adam',
#                metrics=['accuracy', f1, recall, precision])
malstm.compile(loss='categorical_crossentropy', optimizer='adam',
               metrics=[categorical_accuracy])  # , f1, recall, precision
# Start training
training_start_time = time()
malstm_trained = malstm.fit(
    [X_train['left'], X_train['right'], X_train['cfg_A'], X_train['cfg_B']],
    krs.utils.to_categorical(Y_train, 5),
    batch_size=batch_size, epochs=n_epoch,
    # callbacks=[metrics],
    validation_data=(
        [X_validation['left'], X_validation['right'],
         X_validation['cfg_A'], X_validation['cfg_B']],
        krs.utils.to_categorical(Y_validation, 5)))
Answer 0: (score: 0)
I can't tell the exact shapes of your inputs from the sample code, so I can't give you an answer with exact shapes, but in this case you should use a Reshape layer.
The first Embedding layer outputs a (max_seq_length, 300) tensor and the second outputs a (1, 300) tensor; you then flatten and concatenate them. You want to reshape the resulting 2D (batch_size, 300 * max_seq_length + 300) tensor into a 3D shape such as (batch_size, 300 * max_seq_length + 300, 1). Add a Reshape layer with that target shape, then pass the results into your LSTM.
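A minimal sketch of that fix, reusing the variable names from the question (the flat length 300 * max_seq_length + 300 is taken from the shapes above, and note that Keras's Reshape takes a target shape without the batch dimension):

# Reshape the flat 2D concatenation into a 3D (batch, timesteps, features)
# tensor so the shared LSTM accepts it: each scalar of the concatenated
# vector becomes one timestep with a single feature.
flat_len = 300 * max_seq_length + 300   # word-embedding part + doc2vec part
reshaped_left = krs.layers.Reshape((flat_len, 1))(encoded_left)
reshaped_right = krs.layers.Reshape((flat_len, 1))(encoded_right)
left_output = shared_lstm(reshaped_left)
right_output = shared_lstm(reshaped_right)

Both reshaped tensors now have ndim=3, which is what the LSTM layer expects.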