Error connecting Keras layers: "Concatenate" layer requires inputs with matching shapes

Date: 2019-08-28 20:04:33

Tags: python-3.x keras deep-learning concatenation

I have a model called Hierarchical Attention Networks: (figure: HAN architecture diagram)

It was proposed for document classification. I use word2vec embeddings for the sentence words, and I want to concatenate another sentence-level embedding at point A (see figure).

I use it with documents containing 3 sentences; model summary: (figure: model summary)

word_input = Input(shape=(self.max_senten_len,), dtype='float32')
word_sequences = self.get_embedding_layer()(word_input)
word_lstm = Bidirectional(self.hyperparameters['rnn'](self.hyperparameters['rnn_units'], return_sequences=True, kernel_regularizer=kernel_regularizer))(word_sequences)
word_dense = TimeDistributed(Dense(self.hyperparameters['dense_units'], kernel_regularizer=kernel_regularizer))(word_lstm)
word_att = AttentionWithContext()(word_dense)
wordEncoder = Model(word_input, word_att)
sent_input = Input(shape=(self.max_senten_num, self.max_senten_len), dtype='float32')
sent_encoder = TimeDistributed(wordEncoder)(sent_input)

""" I added these following 2 lines. The dimension of self.training_features is (number of training rows, 3, 512). 512 is the dimension of the sentence-level embedding.  """
USE = Input(shape=(self.training_features.shape[1], self.training_features.shape[2]), name='USE_branch')
merge = concatenate([sent_encoder, USE], axis=1)

sent_lstm = Bidirectional(self.hyperparameters['rnn'](self.hyperparameters['rnn_units'], return_sequences=True, kernel_regularizer=kernel_regularizer))(merge)
sent_dense = TimeDistributed(Dense(self.hyperparameters['dense_units'], kernel_regularizer=kernel_regularizer))(sent_lstm)
sent_att = Dropout(dropout_regularizer)(AttentionWithContext()(sent_dense))
preds = Dense(len(self.labelencoder.classes_))(sent_att)
self.model = Model(sent_input, preds)

When I compile the code above, I get the following error:


ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 3, 128), (None, 3, 514)]

I specified the concatenation axis=1 in order to concatenate along the number of sentences (3), but I don't understand why I still get the error.
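Keras's `Concatenate` follows the same rule as NumPy's `np.concatenate`: every dimension except the concat axis must match across inputs. A minimal NumPy sketch of why `axis=1` fails here (the batch dimension is dropped, so axis 0 below corresponds to `axis=1` in the model; the 128/512 widths are taken from the question):

```python
import numpy as np

# Sentence-encoder output for one document: 3 sentences x 128 features
sent = np.zeros((3, 128))
# Sentence-level (USE) embeddings: 3 sentences x 512 features
use = np.zeros((3, 512))

# Concatenating along the sentence axis fails:
# the remaining (feature) axis differs, 128 vs 512.
try:
    np.concatenate([sent, use], axis=0)
except ValueError as e:
    print("sentence-axis concat fails:", e)

# Concatenating along the feature axis works, because the
# sentence axis (3) matches on both inputs.
merged = np.concatenate([sent, use], axis=-1)
print(merged.shape)  # (3, 640)
```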

2 answers:

Answer 0 (score: 0)

That's because with that axis the shapes don't match. It will work if you do this:

merge = concatenate([sent_encoder, USE], axis=-1)

Now there is no shape conflict on the remaining axes.

Answer 1 (score: 0)

The error comes down to two lines. This line:

merge = concatenate([sent_encoder, USE], axis=1)
# should be:
merge = concatenate([sent_encoder, USE], axis=2) # or -1 as @mlRocks suggested

and this line:

self.model = Model(sent_input, preds)
# should be:
self.model = Model([sent_input, USE], preds) # to define both inputs
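The corrected `axis` can be sanity-checked against the shape rule `Concatenate` applies. A small self-contained sketch of that rule in plain Python (no Keras required; `None` stands for the batch dimension, and the `(None, 3, 128)` / `(None, 3, 512)` shapes are taken from the question):

```python
def concat_output_shape(shapes, axis):
    """Mimic Keras Concatenate: all dims except `axis` must match
    (None is treated as a wildcard); dims on `axis` are summed."""
    ndim = len(shapes[0])
    axis = axis % ndim  # normalize negative axes like -1
    out = list(shapes[0])
    for shape in shapes[1:]:
        for i, (a, b) in enumerate(zip(out, shape)):
            if i == axis:
                out[i] = None if (a is None or b is None) else a + b
            elif a is not None and b is not None and a != b:
                raise ValueError(
                    f"inputs with matching shapes required, got {shapes}")
    return tuple(out)

sent_encoder_shape = (None, 3, 128)  # per-sentence encoder output
use_shape = (None, 3, 512)           # sentence-level embedding branch

# Feature-axis concat works: the sentence axis (3) matches.
print(concat_output_shape([sent_encoder_shape, use_shape], axis=-1))
# (None, 3, 640)

# Sentence-axis concat raises, since 128 != 512 on the feature axis.
try:
    concat_output_shape([sent_encoder_shape, use_shape], axis=1)
except ValueError as e:
    print("axis=1 fails:", e)
```

With `axis=-1` the merged tensor is `(None, 3, 640)`, which the following Bidirectional RNN over the 3 sentences accepts.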