ValueError: Layer lstm_3 expects 35 inputs, but it received 3 input tensors

Date: 2020-10-10 02:54:16

Tags: tensorflow keras deep-learning lstm encoder-decoder

I am trying to build a sequence-to-sequence encoder-decoder network for language translation (English to French). The encoder is a stack of three LSTM layers with recurrent dropout, and the decoder is a single LSTM.

The model fits fine, but I keep running into an error in the inference model.

The error says:

ValueError: Layer lstm_3 expects 35 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor 'embedding_1/embedding_lookup_25/Identity_1:0' shape=(None, None, 128) dtype=float32>, <tf.Tensor 'input_87:0' shape=(None, 128) dtype=float32>, <tf.Tensor 'input_88:0' shape=(None, 128) dtype=float32>]

Here is my model:

from tensorflow.keras.layers import Input, LSTM, Embedding, Dense
from tensorflow.keras.models import Model

latent_dim = 128

# Encoder
encoder_inputs = Input(shape=(max_length_english,))
enc_emb = Embedding(vocab_size_source, latent_dim, trainable=True)(encoder_inputs)

# LSTM 1
encoder_lstm1 = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
encoder_output1, state_h1, state_c1 = encoder_lstm1(enc_emb)

# LSTM 2
encoder_lstm2 = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
encoder_output2, state_h2, state_c2 = encoder_lstm2(encoder_output1)

# LSTM 3
encoder_lstm3 = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm3(encoder_output2)

# Set up the decoder.
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(vocab_size_target, latent_dim, trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)

# Decoder LSTM, using the final encoder states as its initial state
decoder_lstm = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
decoder_outputs, decoder_state_h, decoder_state_c = decoder_lstm(dec_emb, initial_state=[state_h, state_c])

# Dense softmax layer over the target vocabulary
decoder_dense = Dense(vocab_size_target, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the full training model
model1 = Model([encoder_inputs, decoder_inputs], decoder_outputs)
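
Since the inference code below looks layers up by position (model_loaded.layers[6], [7], [8]), those indices have to match the layer order of the saved model. A quick sanity check, assuming the model was saved to a file called model1.h5 (the actual save/load step is not shown here):

from tensorflow.keras.models import load_model

model1.save('model1.h5')                # assumed filename
model_loaded = load_model('model1.h5')
model_loaded.summary()                  # confirm which index holds each Embedding/LSTM/Dense layer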

Here is my inference model:

latent_dim = 128

# Encoder inference
encoder_inputs = model_loaded.input[0]  # encoder input placeholder
encoder_outputs, state_h, state_c = model_loaded.layers[6].output  # outputs of the last encoder LSTM

print(encoder_outputs.shape)

encoder_model = Model(inputs=encoder_inputs, outputs=[encoder_outputs, state_h, state_c])

# Decoder inference
# These tensors will hold the states of the previous time step
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_hidden_state_input = Input(shape=(32, latent_dim))

# Get the embeddings of the decoder sequence
decoder_inputs = model_loaded.layers[3].output

print(decoder_inputs.shape)
dec_emb_layer = model_loaded.layers[5]

dec_emb2 = dec_emb_layer(decoder_inputs)

# To predict the next word in the sequence, set the initial states to the states from the previous time step
decoder_lstm = model_loaded.layers[7]
decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=[decoder_state_input_h, decoder_state_input_c])

# A dense softmax layer to generate a probability distribution over the target vocabulary
decoder_dense = model_loaded.layers[8]
decoder_outputs = decoder_dense(decoder_outputs2)

# Final decoder model
decoder_model = Model(
    [decoder_inputs] + [decoder_hidden_state_input, decoder_state_input_h, decoder_state_input_c],
    [decoder_outputs] + [state_h2, state_c2])
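
For context, this is the kind of greedy decoding loop the two inference models are meant to drive. A minimal sketch; target_word_index, reverse_target_word_index, max_length_french, and the 'sos'/'eos' tokens are assumed names from preprocessing that is not shown above:

import numpy as np

def decode_sequence(input_seq):
    # Run the encoder once to get its outputs and final states.
    enc_out, h, c = encoder_model.predict(input_seq)

    # Start decoding from the assumed start-of-sequence token.
    target_seq = np.array([[target_word_index['sos']]])
    decoded = []
    for _ in range(max_length_french):
        output_tokens, h, c = decoder_model.predict([target_seq, enc_out, h, c])
        sampled_index = int(np.argmax(output_tokens[0, -1, :]))
        word = reverse_target_word_index.get(sampled_index, '')
        if word == 'eos':  # assumed end-of-sequence token
            break
        decoded.append(word)
        # Feed the predicted token back in as the next decoder input.
        target_seq = np.array([[sampled_index]])
    return ' '.join(decoded)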

For the optimizer I use rmsprop, and for the loss sparse_categorical_crossentropy:

model1.compile(optimizer='rmsprop',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
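
The log below comes from a run with early stopping on the validation loss. A sketch of the fit call, assuming padded integer arrays encoder_input_data, decoder_input_data, and decoder_target_data (the preprocessing is not shown above), with epochs=100 matching the log:

from tensorflow.keras.callbacks import EarlyStopping

es = EarlyStopping(monitor='val_loss', patience=10, verbose=1)  # patience value is an assumption
history = model1.fit(
    [encoder_input_data, decoder_input_data],  # assumed array names
    decoder_target_data,                       # integer targets for sparse_categorical_crossentropy
    batch_size=128,                            # assumed batch size
    epochs=100,
    validation_split=0.1,                      # assumed split
    callbacks=[es])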

Finally, after 57 epochs, the val_loss and val_accuracy come out as follows:

Epoch 57/100
55/55 [==============================] - 197s 4s/step - loss: 0.7188 - accuracy: 0.8474 - val_loss: 0.9559 - val_accuracy: 0.8271
Epoch 00057: early stopping

0 Answers
