I am trying to implement a pretrained embedding layer, using GloVe, in my generative model.
I feed the model sequences of 50 items (X) extracted from the text, to predict the 51st word (y) of the text.
After training for only 1 of the 100 epochs, the model already reaches an accuracy of 0.99. What could be the problem?
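For reference, the embeddings_index used in the snippet below is presumably built from a GloVe file along these lines (a minimal sketch; the file name glove.6B.100d.txt is an assumption, since the loading code is not shown in the question):

import numpy as np

# Assumed loading step (not shown in the question): map each GloVe word to its 100-d vector.
embeddings_index = {}
with open('glove.6B.100d.txt', encoding='utf-8') as f:
    for line in f:
        values = line.split()
        word = values[0]
        embeddings_index[word] = np.asarray(values[1:], dtype='float32')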
from numpy import zeros
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# create a weight matrix for words in training docs
embedding_matrix = zeros((vocab_size, 100))
for word, i in tokenizer.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector

# define model
model = Sequential()  # a linear stack of layers
model.add(Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=seq_length, trainable=False))  # frozen embedding layer initialised with the GloVe weights
model.add(LSTM(100, return_sequences=True))  # first LSTM layer; returns the full sequence so it can feed the next LSTM
model.add(LSTM(100))  # second LSTM layer; returns only the final hidden state
model.add(Dense(100, activation='relu'))  # fully connected hidden layer with ReLU (rectified linear unit) activation
model.add(Dense(vocab_size, activation='softmax'))  # softmax turns the outputs into a probability distribution over the vocabulary
print(model.summary())

# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# fit the model
model.fit(X, y, batch_size=128, epochs=100, verbose=1)
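For context, X and y are presumably prepared roughly as below before fitting (a sketch under assumed names; raw_text stands in for the training corpus, which is not shown in the question):

from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical
import numpy as np

# Assumed preparation (not shown in the question): sliding windows of 51 tokens,
# where the first 50 words form X and the one-hot-encoded 51st word is y.
tokenizer = Tokenizer()
tokenizer.fit_on_texts([raw_text])  # raw_text: the training corpus (assumed name)
encoded = tokenizer.texts_to_sequences([raw_text])[0]
vocab_size = len(tokenizer.word_index) + 1

sequences = np.array([encoded[i - 51:i] for i in range(51, len(encoded) + 1)])
X, y = sequences[:, :-1], sequences[:, -1]     # X: 50 input words; y: the 51st
y = to_categorical(y, num_classes=vocab_size)  # one-hot targets for the softmax output
seq_length = X.shape[1]                        # 50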
Link to GitHub: https://github.com/KiriKoppelgaard/Generative_model, commit from 14 November 2018.
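In case the intent is unclear: a generative model like this would typically be sampled from with a greedy next-word loop along these lines (a hypothetical sketch, not code from the repo):

from keras.preprocessing.sequence import pad_sequences
import numpy as np

# Hypothetical greedy generation loop: repeatedly predict the most probable
# next word and append it to the running text.
reverse_index = {i: w for w, i in tokenizer.word_index.items()}

def generate(model, tokenizer, seed_text, n_words, seq_length=50):
    text = seed_text
    for _ in range(n_words):
        encoded = tokenizer.texts_to_sequences([text])[0]
        padded = pad_sequences([encoded], maxlen=seq_length, truncating='pre')
        probs = model.predict(padded, verbose=0)[0]
        next_word = reverse_index.get(int(np.argmax(probs)), '')
        text += ' ' + next_word
    return text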