In the tutorial at https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html we have a one-layer seq2seq model. I want to add one layer on the encoder side and one layer on the decoder side. Training seems to work, but I cannot set up the decoder correctly for inference with multiple layers. Here are the changes I made to the tutorial's model.
Encoder:
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder1 = LSTM(
    latent_dim,
    return_sequences=True
)
encoder2 = LSTM(
    latent_dim,
    return_state=True,
)
x = encoder1(encoder_inputs)
encoder_outputs, state_h, state_c = encoder2(x)
Decoder:
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder1 = LSTM(
    latent_dim,
    return_sequences=True
)
decoder2 = LSTM(
    latent_dim,
    return_sequences=True, return_state=True
)
dx = decoder1(decoder_inputs, initial_state=encoder_states)
decoder_outputs, _, _ = decoder2(dx)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)
# decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
Inference (this is the part where I don't know how to set up the multi-layer decoder). My current attempt, which does not work, looks like this:
encoder_model = Model(encoder_inputs, encoder_states)
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
out_decoder1 = LSTM(
    latent_dim,
    return_sequences=True, return_state=True
)
out_decoder2 = LSTM(
    latent_dim,
    return_sequences=True, return_state=True
)
odx = out_decoder1(decoder_inputs, initial_state=decoder_states_inputs)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, state_h, state_c = out_decoder2(odx)
# decoder_outputs, state_h, state_c = decoder_lstm(decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = dict(
    (i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
    (i, char) for char, i in target_token_index.items())

def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)
    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1, 1, num_decoder_tokens))
    # Populate the first character of target sequence with the start character.
    target_seq[0, 0, target_token_index['\t']] = 1.
    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)
        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        print(output_tokens)
        print(sampled_token_index)
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char
        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '\n' or
                len(decoded_sentence) > max_decoder_seq_length):
            stop_condition = True
        # Update the target sequence (of length 1).
        target_seq = np.zeros((1, 1, num_decoder_tokens))
        target_seq[0, 0, sampled_token_index] = 1.
        # Update states
        states_value = [h, c]
    return decoded_sentence

for seq_index in range(1):
    # Take one sequence (part of the training set)
    # for trying out decoding.
    input_seq = encoder_input_data[seq_index: seq_index + 1]
    decoded_sentence = decode_sequence(input_seq)
    print('-')
    print('Input sentence:', input_texts[seq_index])
    print('Decoded sentence:', decoded_sentence)
Thanks!
Answer 0 (score: 1)
After struggling with the same problem for a few days, here is what I found to work. The encoder setup comes first; the decoding (which is also the hardest) part follows further down:
# using multiple LSTM layers for encoding is not a problem at all.
# Here I used 3. Pay attention to the flags. The sequence of the last
# layer is not returned because we want a single vector that stores everything, not a time-sequence...
encoder_input = Input(shape=(None, num_allowed_chars), name='encoder_input')
encoder_lstm1 = LSTM(state_size, name='encoder_lstm1',
                     return_sequences=True, return_state=True)
encoder_lstm2 = LSTM(state_size, name='encoder_lstm2',
                     return_sequences=True, return_state=True)
encoder_lstm3 = LSTM(state_size, name='encoder_lstm3',
                     return_sequences=False, return_state=True)

# Connect all the LSTM layers.
x = encoder_input
x, _, _ = encoder_lstm1(x)
x, _, _ = encoder_lstm2(x)
# only the states of the last layer are of interest.
x, state_h, state_c = encoder_lstm3(x)
encoder_output = x  # This is the encoded, fixed-size vector which seq2seq is all about
encoder_states = [state_h, state_c]
This is the decoder part. The wiring used for training is slightly different from the wiring used for inference; both are shown below:
# here is something new: for every decoding layer we need an Input variable for both the hidden
# state (h) and the cell state (c). Here I use two stacked decoding layers and therefore initialize h1,c1,h2,c2.
decoder_initial_state_h1 = Input(shape=(state_size,),
                                 name='decoder_initial_state_h1')
decoder_initial_state_c1 = Input(shape=(state_size,),
                                 name='decoder_initial_state_c1')
decoder_initial_state_h2 = Input(shape=(state_size,),
                                 name='decoder_initial_state_h2')
decoder_initial_state_c2 = Input(shape=(state_size,),
                                 name='decoder_initial_state_c2')
decoder_input = Input(shape=(None, num_allowed_chars), name='decoder_input')

# pay attention to the return_sequences and return_state flags.
decoder_lstm1 = LSTM(state_size, name='decoder_lstm1',
                     return_sequences=True, return_state=True)
decoder_lstm2 = LSTM(state_size, name='decoder_lstm2',
                     return_sequences=True, return_state=True)
decoder_dense = Dense(
    num_allowed_chars, activation='softmax', name="decoder_output")

# connect the decoder for training (initial state = encoder_states)
# I feed the encoder_states as the initial state to both decoding lstm layers
x = decoder_input
x, h1, c1 = decoder_lstm1(x, initial_state=encoder_states)
# I tried to pass [h1, c1] as initial states in the line below, but that resulted in rubbish
x, _, _ = decoder_lstm2(x, initial_state=encoder_states)
decoder_output = decoder_dense(x)

model_train = Model(inputs=[encoder_input, decoder_input],
                    outputs=decoder_output)
model_encoder = Model(inputs=encoder_input,
                      outputs=encoder_states)
Here is the decoding part. It builds on your code; just note how I predict the encoded vector once, outside the loop, and then duplicate it so it can be fed into decoder_model.predict as the initial state of both LSTM layers. The second tricky bit is getting all four output states back from .predict() and feeding them in again at the next step (a sketch of the full sampling loop follows after the model-setup code below).
# this decoder model setup is used for inference
# important! Every layer keeps its own states. This is, again, important in decode_sequence()
x = decoder_input
x, h1, c1 = decoder_lstm1(
    x, initial_state=[decoder_initial_state_h1, decoder_initial_state_c1])
x, h2, c2 = decoder_lstm2(
    x, initial_state=[decoder_initial_state_h2, decoder_initial_state_c2])
decoder_output = decoder_dense(x)
decoder_states = [h1, c1, h2, c2]

model_decoder = Model(
    inputs=[decoder_input] + [decoder_initial_state_h1, decoder_initial_state_c1,
                              decoder_initial_state_h2, decoder_initial_state_c2],
    outputs=[decoder_output] + decoder_states)  # model outputs h1,c1,h2,c2!

model_train.summary()
model_train.compile(optimizer='rmsprop',
                    loss='categorical_crossentropy', metrics=["acc"])

plot_model(model_train, to_file=data_path_prefix +
           'spellchecker/model_train.png')
plot_model(model_encoder, to_file=data_path_prefix +
           'spellchecker/model_encode.png')
plot_model(model_decoder, to_file=data_path_prefix +
           'spellchecker/model_decode.png')
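The answer describes the sampling loop but does not show it, so here is a minimal sketch of how it could look under the assumptions above: the encoder is run once outside the loop, its [h, c] pair is duplicated so that both decoder layers receive an initial state, and all four states returned by .predict() are fed back in at the next step. The names char_to_index, index_to_char and max_seq_length are hypothetical stand-ins for the question's target_token_index, reverse_target_char_index and max_decoder_seq_length.
import numpy as np

def decode_sequence(input_seq):
    # Encode once, outside the loop; the same [h, c] initializes both decoder layers.
    state_h, state_c = model_encoder.predict(input_seq)
    states = [state_h, state_c, state_h, state_c]  # h1, c1, h2, c2
    # Start with the start-of-sequence character (assumed to be '\t', as in the tutorial).
    target_seq = np.zeros((1, 1, num_allowed_chars))
    target_seq[0, 0, char_to_index['\t']] = 1.
    decoded_sentence = ''
    while True:
        # The decoder model returns the softmax output plus all four states.
        output_tokens, h1, c1, h2, c2 = model_decoder.predict([target_seq] + states)
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = index_to_char[sampled_token_index]
        decoded_sentence += sampled_char
        # Stop at the end-of-sequence character or when the output gets too long.
        if sampled_char == '\n' or len(decoded_sentence) > max_seq_length:
            break
        # Feed the sampled character and all four states back in for the next step.
        target_seq = np.zeros((1, 1, num_allowed_chars))
        target_seq[0, 0, sampled_token_index] = 1.
        states = [h1, c1, h2, c2]
    return decoded_sentence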
I hope this helps. There are millions of simple one-layer examples out there, but hardly any with more layers. Extending this to more than two decoding layers should now be straightforward.
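For example, a third decoder layer would follow the same pattern (a sketch only; the names are illustrative, and the two extra Inputs would also have to be added to model_decoder's input list): it gets its own pair of state Inputs, is stacked on top of decoder_lstm2 in both the training and the inference wiring, and its two states are appended to decoder_states, so the sampling loop then tracks six state arrays instead of four.
decoder_initial_state_h3 = Input(shape=(state_size,), name='decoder_initial_state_h3')
decoder_initial_state_c3 = Input(shape=(state_size,), name='decoder_initial_state_c3')
decoder_lstm3 = LSTM(state_size, name='decoder_lstm3',
                     return_sequences=True, return_state=True)
# Inference wiring: stack the layer on top of decoder_lstm2 and collect its states as well.
x, h3, c3 = decoder_lstm3(x, initial_state=[decoder_initial_state_h3, decoder_initial_state_c3])
decoder_output = decoder_dense(x)
decoder_states = [h1, c1, h2, c2, h3, c3]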
Good luck! (This was my first answer of this kind :-) )
Answer 1 (score: 0)
I made only a few changes, and it seems to work fine.
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True, return_sequences=True)
encoder2 = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder2(encoder(encoder_inputs))
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder2 = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder2(decoder(decoder_inputs, initial_state=encoder_states))
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

# Define sampling models
encoder_model = Model(encoder_inputs, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
decoder_outputs, state_h, state_c = decoder(
    decoder_inputs, initial_state=decoder_states_inputs)
decoder2_outputs, state_h2, state_c2 = decoder2(
    decoder(decoder_inputs, initial_state=[state_h, state_c]))
decoder_states = [state_h2, state_c2]
decoder_outputs = decoder_dense(decoder2_outputs)
decoder_model = Model(
    [decoder_inputs] + decoder_states_inputs,
    [decoder_outputs] + decoder_states)
See if this works for you.
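A short usage note (an observation about the shapes above, not something stated in the answer): because this inference decoder_model still takes exactly two state tensors and returns two, the decode_sequence loop from the original tutorial can be reused unchanged, e.g.:
# Inside the sampling loop, the interface is the same as in the one-layer tutorial:
states_value = encoder_model.predict(input_seq)  # [state_h, state_c]
output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
states_value = [h, c]  # fed back in on the next step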