LSTM loss stays high and does not decrease with each epoch

Date: 2019-02-18 08:52:44

Tags: python machine-learning keras nlp lstm

I have rows containing some text, and for each row I want to predict the next word (word_0 -> word_1, not word_0 and word_1 -> word_2, and so on). There is a great tutorial with source code here: Predict next word Source code
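For context, here is a minimal sketch of how such (previous word -> next word) index pairs could be built, assuming a list of tokenized lines called lines and a gensim Word2Vec model called w2v_model (these names are only for illustration, not my exact code):

import numpy as np

def build_pairs(lines, w2v_model):
    # Turn every tokenized line into (previous word index -> next word index) pairs.
    X, y = [], []
    for words in lines:                                   # each line is a list of tokens
        for w_prev, w_next in zip(words[:-1], words[1:]):
            if w_prev in w2v_model.wv.vocab and w_next in w2v_model.wv.vocab:
                X.append([w2v_model.wv.vocab[w_prev].index])   # length-1 input sequence
                y.append(w2v_model.wv.vocab[w_next].index)     # integer target index
    return np.array(X), np.array(y)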

But with my setup, the loss does not decrease:

....
Epoch 42/50
2668/2668 [==============] - 1777s 666ms/step - loss: 4.6435 - acc: 0.1361
Epoch 43/50
2668/2668 [==============] - 1791s 671ms/step - loss: 4.6429 - acc: 0.1361
Epoch 44/50
2668/2668 [==============] - 1773s 665ms/step - loss: 4.6431 - acc: 0.1361
Epoch 45/50
2668/2668 [==============] - 1770s 664ms/step - loss: 4.6417 - acc: 0.1361
Epoch 46/50
2668/2668 [==============] - 1774s 665ms/step - loss: 4.6436 - acc: 0.1361
....

My LSTM network setup:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense, Activation

nn_model = Sequential()
# Embedding layer initialized with the pretrained Word2Vec vectors
nn_model.add(Embedding(input_dim=vocab_size, output_dim=embedding_size,
                       weights=[pretrained_weights]))
nn_model.add(LSTM(units=embedding_size, return_sequences=True))
nn_model.add(LSTM(units=embedding_size))
nn_model.add(Dense(units=vocab_size))
nn_model.add(Activation('softmax'))
nn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])

where:

pretrained_weights = model.wv.syn0   # model is a Word2Vec model
vocab_size, embedding_size = pretrained_weights.shape
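
For completeness, a minimal sketch of the input and target shapes I assume this model expects (dummy data; the sequence length and sample count are only illustrative):

import numpy as np

seq_len = 1                                                    # one previous word per sample
X_dummy = np.random.randint(vocab_size, size=(32, seq_len))    # integer word indices
y_dummy = np.random.randint(vocab_size, size=(32,))            # integer next-word indices

# sparse_categorical_crossentropy expects integer class labels, not one-hot vectors
nn_model.fit(X_dummy, y_dummy, batch_size=8, epochs=1)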

I have tried changing the batch_size (128, 64, 20, 10) and tried adding more LSTM layers, but nothing helps. What is wrong, and how can I fix it?

0 Answers:

No answers yet.