My model is:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

model = Sequential()
# Embedding maps each token id to a 1024-dimensional vector
model.add(Embedding(input_dim=vocab_size,
                    output_dim=1024, input_length=self.SEQ_LENGTH))
model.add(LSTM(vocab_size))
model.add(Dropout(rate=0.5))
model.add(Dense(vocab_size - 1, activation='softmax'))
I have trained it. But now, during inference, how do I use that embedding?
Answer 0 (score: 1)
Your question is addressed here. As a skeleton, you can use the following code:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Use the same tokenizer that was fitted on the training data
tokenizer_obj = Tokenizer()
tokenizer_obj.fit_on_texts(your_dataset)
...
# maxlen should match the input_length the model was trained with
max_length = max_number_words
X_test_tokens = tokenizer_obj.texts_to_sequences(X_test)
X_test_pad = pad_sequences(X_test_tokens, maxlen=max_length, padding='post')
score, acc = model.evaluate(X_test_pad, y_test, batch_size=128)
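To answer the inference part of the question: the trained Embedding layer is already part of the model, so for prediction you only need to tokenize and pad new text the same way and call model.predict. A minimal sketch, assuming the tokenizer above and a hypothetical list new_texts of raw strings:

new_tokens = tokenizer_obj.texts_to_sequences(new_texts)   # new_texts is hypothetical input
new_pad = pad_sequences(new_tokens, maxlen=max_length, padding='post')
probs = model.predict(new_pad)   # softmax scores per sample

If you want the embedding vectors themselves, one option is a sub-model that stops at the embedding layer:

from tensorflow.keras.models import Model
embedding_model = Model(inputs=model.input, outputs=model.layers[0].output)
embeddings = embedding_model.predict(new_pad)   # shape: (num_samples, SEQ_LENGTH, 1024)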