I am building a seq2seq model with Keras, following this example code: https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq.py
When I train that code as published, it works and gives good results. However, when I try to train it with a pretrained embedding model, the loss and crossentropy are always negative.
I tried to overfit the model on a dataset of only 5 examples to make sure it is working correctly, but the loss and crossentropy are still negative.
I use a FastText embedding model; here is the code that loads the dataset with the embedding vectors:
encoder_input_data = np.zeros(
    (input_texts_len, max_encoder_seq_length, vector_length),
    dtype='float32')
decoder_input_data = np.zeros(
    (input_texts_len, max_decoder_seq_length, vector_length),
    dtype='float32')
decoder_target_data = np.zeros(
    (input_texts_len, max_decoder_seq_length, vector_length),
    dtype='float32')

padding = np.zeros((vector_length,), dtype='float32')

for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
    for t, word in enumerate(input_text):
        encoder_input_data[i, t] = w2v.get_vector(word)
    encoder_input_data[i, t + 1:] = padding
    for t, word in enumerate(target_text):
        decoder_input_data[i, t] = w2v.get_vector(word)
        if t > 0:
            # decoder_target_data is ahead of decoder_input_data by one timestep
            decoder_target_data[i, t - 1] = w2v.get_vector(word)
    decoder_input_data[i, t + 1:] = padding
    decoder_target_data[i, t] = padding
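For reference, a quick sanity check on the arrays built above (w2v is the loaded gensim FastText vectors; this snippet is only a diagnostic, not part of the model):

# FastText components are arbitrary floats; min() here comes out negative.
print(decoder_target_data.min(), decoder_target_data.max())
print(decoder_target_data.shape)  # (input_texts_len, max_decoder_seq_length, vector_length)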
Here is the model code itself:
encoder_inputs = Input(shape=(max_encoder_seq_length, vec_leng,))
x = Masking(mask_value=0.0)(encoder_inputs)
# return_sequences/return_state are needed: the states initialize the
# decoder, and the full output sequence feeds the attention layer
encoder = LSTM(latent_dim, return_sequences=True, return_state=True, name='lstm_1')
encoder_outputs, state_h, state_c = encoder(x)
encoder_states = [state_h, state_c]

decoder_inputs = Input(shape=(max_decoder_seq_length, vec_leng,))
a = Masking(mask_value=0.0)(decoder_inputs)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True, name='decoder_lstm')
decoder_outputs, _, _ = decoder_lstm(a, initial_state=encoder_states)

# Attention layer
attn_layer = AttentionLayer(name='attention_layer')
attn_out, attn_states = attn_layer([encoder_outputs, decoder_outputs])
decoder_concat_input = Concatenate(axis=-1)([decoder_outputs, attn_out])

decoder_dense = Dense(vec_leng, activation='softmax')
dense_time = TimeDistributed(decoder_dense, name='time_distributed_layer')
decoder_pred = dense_time(decoder_concat_input)

model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_pred, name='main_model')

# Inference models
encoder_model = Model(inputs=encoder_inputs, outputs=[encoder_outputs, state_h, state_c], name='encoder_model')

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
encoder_states_ = Input(batch_shape=(1, max_encoder_seq_length, latent_dim))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]

a = Input(shape=(max_decoder_seq_length, vec_leng,))
decoder_outputs, state_h, state_c = decoder_lstm(a, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]

attn_inf_out, attn_inf_states = attn_layer([encoder_states_, decoder_outputs])
decoder_inf_concat = Concatenate(axis=-1)([decoder_outputs, attn_inf_out])
decoder_inf_pred = TimeDistributed(decoder_dense)(decoder_inf_concat)

decoder_model = Model(
    [encoder_states_, decoder_states_inputs, a],
    [decoder_inf_pred, attn_inf_states, decoder_states], name='decoder_model')
What is causing these negative values, and how can I fix them?
Answer 0 (score: 0)
You are getting negative loss values because the target vectors are wrong: for categorical crossentropy, the targets must be one-hot vectors, i.e. every element an integer 0 or 1. Your decoder_target_data is instead filled with raw FastText embedding components, which are arbitrary floats and frequently negative.
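To see why the sign flips, assume the model is compiled with loss='categorical_crossentropy' as in the linked example. The per-timestep loss is -Σ_i y_i * log(ŷ_i). The softmax outputs satisfy 0 < ŷ_i < 1, so log(ŷ_i) < 0 and each term -y_i * log(ŷ_i) is non-negative whenever y_i is 0 or 1; but when y_i is a negative embedding component, its term is negative, and the total loss can drop below zero.

Here is a minimal sketch of one way to fix it (my suggestion, not code from the question): keep the FastText vectors as encoder/decoder inputs, but make the targets integer word ids over a target-side vocabulary and train with sparse crossentropy. The word_index mapping is a name I'm introducing for illustration:

import numpy as np
from keras.layers import Dense

# Hypothetical target-side vocabulary; id 0 is reserved for padding.
vocab = sorted({word for text in target_texts for word in text})
word_index = {word: i + 1 for i, word in enumerate(vocab)}
num_decoder_tokens = len(word_index) + 1

# Targets become integer class ids instead of embedding vectors.
decoder_target_data = np.zeros(
    (input_texts_len, max_decoder_seq_length), dtype='int32')
for i, target_text in enumerate(target_texts):
    for t, word in enumerate(target_text):
        if t > 0:
            # Same one-step offset as before: the target is the next word.
            decoder_target_data[i, t - 1] = word_index[word]

# The output layer now predicts a distribution over the vocabulary,
# so softmax + crossentropy is well defined and non-negative.
decoder_dense = Dense(num_decoder_tokens, activation='softmax')

model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy')
# Note: some Keras versions expect sparse targets with a trailing axis,
# i.e. shape (batch, timesteps, 1); if so, np.expand_dims(..., -1) first.

Alternatively, if you want to keep embedding vectors as the targets, drop the softmax (use a linear output) and train with a regression loss such as MSE or cosine similarity, then map each predicted vector back to a word at inference time (e.g. with gensim's similar_by_vector).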