Here is the model I defined:
# make encoder model
encoder_input = Input(batch_shape=(BATCH_SIZE, MAX_SENTENCE_SIZE, VOCAB_SIZE),
                      name='encoder_input')
mask = Masking(mask_value=0.0)(encoder_input)
encoded = LSTM(ENCODING_SIZE, return_sequences=False, name='encoded')(mask)
encoder = Model(encoder_input, encoded)
# make decoder model
decoder_input = Input(shape=(BATCH_SIZE, ))
decoded = RepeatVector(MAX_SENTENCE_SIZE)(decoder_input)
decoded = LSTM(VOCAB_SIZE, return_sequences=True)(decoded)
decoded = TimeDistributed(Dense(VOCAB_SIZE, activation='softmax',
                                name='decoded'))(decoded)
decoder = Model(decoder_input, decoded)
# make sequence autoencoder
encoder_decoder_input = Input(batch_shape=(BATCH_SIZE, MAX_SENTENCE_SIZE, VOCAB_SIZE),
                              name='encoder_decoder_input')
encoder_output = encoder(encoder_decoder_input)
decoder_output = decoder(encoder_output)
sequence_autoencoder = Model(encoder_decoder_input, decoder_output)
sequence_autoencoder.compile(loss='categorical_crossentropy', optimizer='adam')
When I try to train it with train_on_batch, feeding inputs and targets of the correct dimensions, it gives me this error:
Invalid argument: You must feed a value for placeholder tensor 'encoder_input' with dtype float and shape [8,64,10]
[[Node: encoder_input = Placeholder[dtype=DT_FLOAT, shape=[8,64,10], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
which makes no sense to me. Any idea why this is happening?
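For reference, this is roughly how I build the batch I feed in. This is a minimal sketch: the constant values 8, 64, and 10 are inferred from the placeholder shape [8,64,10] in the error, and the random tokens are a stand-in for my real data.

```python
import numpy as np

# Constants inferred from the placeholder shape [8, 64, 10] in the error
BATCH_SIZE = 8
MAX_SENTENCE_SIZE = 64
VOCAB_SIZE = 10

# One-hot encoded batch: each timestep is a one-hot vector over the vocabulary
x = np.zeros((BATCH_SIZE, MAX_SENTENCE_SIZE, VOCAB_SIZE), dtype='float32')
tokens = np.random.randint(0, VOCAB_SIZE, size=(BATCH_SIZE, MAX_SENTENCE_SIZE))
x[np.arange(BATCH_SIZE)[:, None], np.arange(MAX_SENTENCE_SIZE), tokens] = 1.0

# For an autoencoder, the target is the input itself
y = x.copy()

print(x.shape)  # (8, 64, 10)
# sequence_autoencoder.train_on_batch(x, y)  # this is the call that fails
```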