Negative loss values - Seq2seq model in Keras

Date: 2020-07-18 18:16:24

Tags: keras loss-function seq2seq

I am trying to build a chatbot using a seq2seq model in Keras, following the standard seq2seq model described in the Keras Blog. I use Word2vec for the word embeddings. My problem is that the loss keeps becoming more and more negative during training. Why is this happening, and how can I fix it? Thanks.

from keras.models import Model
from keras.layers import Input, LSTM, Dense

# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens))
encoder = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]

# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens))
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the 
# return states in the training model, but we will use them in inference.
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
                                     initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)

model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.2)

1 answer:

Answer 0 (score: 0)

I found the reason for the negative values. It is mainly because the word2vec vector representations used in decoder_target_data contain negative values, so the loss function cannot compute the loss correctly: categorical crossentropy is -sum(y_true * log(y_pred)) and assumes y_true is a one-hot (probability) distribution, so negative components in y_true can flip the sign of the sum and the reported loss goes negative.
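
To see this concretely, here is a minimal numpy sketch (not from the original post; the numbers are illustrative only) of the categorical crossentropy computation:

import numpy as np

def categorical_crossentropy(y_true, y_pred):
    # -sum(y_true * log(y_pred)); assumes y_true is a probability distribution
    return -np.sum(y_true * np.log(y_pred))

y_pred = np.array([0.7, 0.2, 0.1])       # softmax output, entries in (0, 1)

one_hot = np.array([1.0, 0.0, 0.0])      # valid one-hot target
print(categorical_crossentropy(one_hot, y_pred))   # ~0.357, non-negative

w2v_like = np.array([-0.5, -0.3, 0.1])   # word2vec-style target with negative components
print(categorical_crossentropy(w2v_like, y_pred))  # ~-0.431, a negative "loss"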

The fix: one-hot encode the whole vocabulary for decoder_target_data instead of using the word2vec vector representations there. Each target timestep is then a valid probability distribution, so the loss stays non-negative (a sketch follows below).
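
A minimal sketch of building one-hot targets, assuming the variable names from the Keras Blog seq2seq example (target_texts, target_token_index, max_decoder_seq_length, num_decoder_tokens) and whitespace-tokenized target sentences:

import numpy as np

# One row per sample, one timestep per token, one column per vocabulary entry.
decoder_target_data = np.zeros(
    (len(target_texts), max_decoder_seq_length, num_decoder_tokens),
    dtype='float32')

for i, target_text in enumerate(target_texts):
    for t, token in enumerate(target_text.split()):
        if t > 0:
            # decoder_target_data is ahead of decoder_input_data by one timestep
            decoder_target_data[i, t - 1, target_token_index[token]] = 1.0

With one-hot targets, categorical_crossentropy receives valid probability distributions and the training loss is bounded below by zero.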