I am getting an error when using Input with Embedding as my first layer. Although I clearly specified the shape as (, 9), it cannot find a tensor with the Input() shape. Can someone help me?
The code is as follows:
from keras.models import Model
from keras.layers import (Input, Embedding, LSTM, Dense, Flatten,
                          Activation, RepeatVector, Permute,
                          TimeDistributed, dot)
from keras.utils import plot_model

def model_3(src_vocab, tar_vocab, src_timesteps, tar_timesteps, n_units):
    _nput = Input(shape=(src_timesteps,), dtype='int32')
    embedding = Embedding(input_dim=src_vocab, output_dim=n_units,
                          input_length=src_timesteps, mask_zero=False)(_nput)
    activations = LSTM(n_units, return_sequences=True)(embedding)
    # Score each encoder timestep, then normalise the scores with softmax
    attention = Dense(1, activation='tanh')(activations)
    attention = Flatten()(attention)
    attention = Activation('softmax')(attention)
    # Repeat the attention weights once per target timestep
    attention = RepeatVector(tar_timesteps)(attention)
    activations = Permute([2, 1])(activations)
    # Attention-weighted sum of the encoder states
    sent_representation = dot([attention, activations], axes=-1)
    sent_representation = LSTM(n_units, return_sequences=True)(sent_representation)
    sent_representation = TimeDistributed(Dense(tar_vocab, activation='softmax'))(sent_representation)
    # Keras 2 uses inputs=/outputs=; the output tensor is sent_representation
    model = Model(inputs=_nput, outputs=sent_representation)
    model.compile(optimizer='adam', loss='categorical_crossentropy')
    model.summary()
    plot_model(model, to_file='model.png', show_shapes=True)
    return model
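
For context, a minimal sketch of how the function could be invoked. The hyperparameter values below are hypothetical placeholders, not values from the original question; src_timesteps=9 is chosen only so the source sequence length matches the (, 9) shape described above:

# Hypothetical hyperparameters for illustration only
model = model_3(src_vocab=5000, tar_vocab=6000,
                src_timesteps=9, tar_timesteps=12, n_units=256)
# The compiled model expects int32 inputs of shape (batch, 9) and
# predicts per-timestep target distributions of shape (batch, 12, 6000)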