Why does the Keras Tokenizer's texts_to_sequences return the same value for every text?

Asked: 2019-12-28 17:10:31

Tags: python tensorflow machine-learning keras

I am trying to build a Keras LSTM that classifies words as 0 or 1, but no matter what text I enter, the network returns values close to zero. I have narrowed the problem down to the Keras Tokenizer: I added a debug print statement and commented out the model.predict() code to test it, and every word I enter is converted to the same array, [[208]].

The code is below:

from builtins import len

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras import layers
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
import enchant
import re

d = enchant.Dict("en_US")

df = pd.read_csv('sentiments.csv')
df.columns = ["label", "text"]
x = df['text'].values
y = df['label'].values

x_train, x_test, y_train, y_test = \
    train_test_split(x, y, test_size=0.1, random_state=123)

tokenizer = Tokenizer(num_words=100)

tokenizer.fit_on_texts(x)
xtrain = tokenizer.texts_to_sequences(x_train)
xtest = tokenizer.texts_to_sequences(x_test)

vocab_size = len(tokenizer.word_index) + 1

maxlen = 10
xtrain = pad_sequences(xtrain, padding='post', maxlen=maxlen)
xtest = pad_sequences(xtest, padding='post', maxlen=maxlen)

print(x_train[3])
print(xtrain[3])

embedding_dim = 50
model = Sequential()
model.add(layers.Embedding(input_dim=(vocab_size+1),
                           output_dim=embedding_dim,
                           input_length=maxlen))
model.add(layers.LSTM(units=50, return_sequences=True))
model.add(layers.LSTM(units=10))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(8))
model.add(layers.Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=['accuracy'])
model.summary()
model.fit(xtrain, y_train, epochs=20, batch_size=16, verbose=False)

loss, acc = model.evaluate(xtrain, y_train, verbose=False)
print("Training Accuracy: ", acc)
loss, acc = model.evaluate(xtest, y_test, verbose=False)
print("Test Accuracy: ", acc)

text_input = str(input("Enter a word for analysis: "))

if d.check(text_input):
    word_Arr = []
    word_Arr.append(text_input)
    tokenizer.fit_on_texts(word_Arr)
    word_final = tokenizer.texts_to_sequences(word_Arr)
    word_final_final = np.asarray(word_final)

    print(word_final_final)

    # newArr = np.zeros(shape=(6, 10))
    # newArr[0] = word_final_final

    # print(model.predict(newArr))

How should I proceed?

1 Answer:

Answer 0 (score: 3)

You fit your Tokenizer instance on the training corpus:

tokenizer = Tokenizer(num_words=100)

tokenizer.fit_on_texts(x)

and then fit it again on the newly entered word:

tokenizer.fit_on_texts(word_Arr)

Each call to fit_on_texts updates the tokenizer's word counts and rebuilds its word-to-index mapping, so the indices your model was trained on are recomputed: the re-fitted Tokenizer now assigns indices based on counts skewed by the word you just typed, which is why every input collapses to the same value.

Example:

tokenizer = Tokenizer(num_words=100)
tokenizer.fit_on_texts(["dog, cat, horse"])
text_input = str(input("Enter a word for analysis: "))

word_Arr = []
word_Arr.append(text_input)

# here is your problem!!!
tokenizer.fit_on_texts(word_Arr)

word_final = tokenizer.texts_to_sequences(word_Arr)
word_final_final = np.asarray(word_final)

print(word_final_final)

Output:

Enter a word for analysis: dog
[[1]]
Enter a word for analysis: cat
[[1]]
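
Both words map to [[1]] because fit_on_texts accumulates word counts across calls and rebuilds word_index, sorted by frequency, on every call; re-fitting on the single entered word makes that word the most frequent token. A minimal sketch that inspects word_index to show the re-ranking (the exact order of tied words may vary):

from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=100)
tokenizer.fit_on_texts(["dog, cat, horse"])
print(tokenizer.word_index)  # e.g. {'dog': 1, 'cat': 2, 'horse': 3}

# Re-fitting on the entered word bumps its count to 2, so it is
# promoted to index 1 when word_index is rebuilt.
tokenizer.fit_on_texts(["cat"])
print(tokenizer.word_index)  # e.g. {'cat': 1, 'dog': 2, 'horse': 3}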

With the problematic line commented out:

tokenizer = Tokenizer(num_words=100)

tokenizer.fit_on_texts(["dog, cat, horse"])
text_input = str(input("Enter a word for analysis: "))

word_Arr = []
word_Arr.append(text_input)

# commenting out your problem!!!
# tokenizer.fit_on_texts(word_Arr)

word_final = tokenizer.texts_to_sequences(word_Arr)
word_final_final = np.asarray(word_final)

print(word_final_final)

Output:

Enter a word for analysis: cat
[[2]]
Enter a word for analysis: dog
[[1]]
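
To finish the prediction path in the original script, reuse the tokenizer that was fitted on the training corpus (drop the extra fit_on_texts call), pad the sequence to the training length, and pass it to the model. A hedged sketch, assuming the model, tokenizer, maxlen, and text_input from the question are still in scope:

if d.check(text_input):
    # Do NOT re-fit the tokenizer here; reuse the training vocabulary.
    word_final = tokenizer.texts_to_sequences([text_input])
    # Pad to the same length the model was trained on (maxlen=10).
    word_padded = pad_sequences(word_final, padding='post', maxlen=maxlen)
    print(model.predict(word_padded))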