I'm using:
from keras.preprocessing.text import Tokenizer
max_words = 10000
text = 'Decreased glucose-6-phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics.'
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(text)
sequences = tokenizer.texts_to_sequences(text)
print(sequences)
The result is:
[[8], [2], [7], [12], [2], [5], [1], [2], [8], [], [14], [9], [16], [7], [6], [1], [2], [], [19], [], [17], [10], [6], [1], [17], [10], [5], [3], [2], [], [8], [2], [10], [15], [8], [12], [6], [14], [2], [11], [5], [1], [2], [], [5], [7], [3], [4], [13], [4], [3], [15], [], [5], [9], [6], [11], [14], [], [20], [4], [3], [10], [], [6], [21], [4], [8], [5], [3], [4], [13], [2], [], [1], [3], [12], [2], [1], [1], [], [5], [18], [18], [2], [7], [3], [1], [], [13], [4], [1], [16], [5], [9], [], [7], [6], [11], [3], [12], [5], [1], [3], [], [1], [2], [11], [1], [4], [3], [4], [13], [4], [3], [15], [], [4], [11], [], [5], [9], [7], [6], [10], [6], [9], [4], [7], [1], []]
What does this actually mean, and why are there so many entries? I can see that when Keras splits the text above, there are 16 words:
{'oxidative', 'contrast', '6', 'affects', 'in', 'dehydrogenase', 'visual', 'stress', 'glucose', 'phosphate', 'along', 'activity', 'with', 'alcoholics', 'decreased', 'sensitivity'}
Incidentally, that split is wrong for my case, because I want to keep glucose-6-phosphate from being split apart, but I believe I can prevent that with:
tokenizer = Tokenizer(num_words=max_words, filters='!"#$%&()*+,./:;<=>?@[\\]^_`{|}~\t\n')
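A minimal sketch of that idea, assuming the text is also wrapped in a list so the Tokenizer works at word level (dropping '-' from the default filter string keeps the hyphenated token intact):

# Sketch: a custom filter string without '-' preserves hyphenated tokens.
from keras.preprocessing.text import Tokenizer

text = 'Decreased glucose-6-phosphate dehydrogenase activity.'
tokenizer = Tokenizer(num_words=10000,
                      filters='!"#$%&()*+,./:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts([text])  # note: a list of texts, not a bare string
print(tokenizer.word_index)
# Expected: {'decreased': 1, 'glucose-6-phosphate': 2, 'dehydrogenase': 3, 'activity': 4}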
Answer 0 (score: 2)
This happens because the Tokenizer built a dictionary of characters rather than a dictionary of words. The dictionary looks like this:
{'s': 1, 'e': 2, 't': 3, 'i': 4, 'a': 5, 'o': 6, 'c': 7, 'd': 8, 'l': 9, 'h': 10, 'n': 11, 'r': 12, 'v': 13, 'g': 14, 'y': 15, 'u': 16, 'p': 17, 'f': 18, '6': 19, 'w': 20, 'x': 21}
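The underlying reason is that fit_on_texts iterates over whatever it receives, and iterating a bare Python string yields individual characters, so each character gets treated as a separate text; a quick sketch:

# Iterating a string yields characters, which is why a bare string
# passed to fit_on_texts produces a character-level index.
text = 'Decreased glucose'
print(list(text)[:6])   # ['D', 'e', 'c', 'r', 'e', 'a']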
The Tokenizer takes a list rather than a string as input. Do this instead:
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.text import text_to_word_sequence
max_words = 10000
text = 'Decreased glucose-6-phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics.'
text = text_to_word_sequence(text)  # split the sentence into a list of lowercase words
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(text)  # each word in the list is treated as a separate text
sequences = tokenizer.texts_to_sequences(text)
print(sequences)
This is what your dictionary looks like now:
{'decreased': 1, 'glucose': 2, '6': 3, 'phosphate': 4, 'dehydrogenase': 5, 'activity': 6, 'along': 7, 'with': 8, 'oxidative': 9, 'stress': 10, 'affects': 11, 'visual': 12, 'contrast': 13, 'sensitivity': 14, 'in': 15, 'alcoholics': 16}
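Note that with this approach texts_to_sequences(text) returns one single-element sequence per word ([[1], [2], [3], ...]). If you want a single sequence for the whole sentence instead, pass the sentence wrapped in a list; a sketch building on the snippet above:

# One sequence for the whole sentence: pass a list containing the string.
sentence = 'Decreased glucose-6-phosphate dehydrogenase activity along with oxidative stress affects visual contrast sensitivity in alcoholics.'
print(tokenizer.texts_to_sequences([sentence]))
# Expected: [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]]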
Answer 1 (score: 2)
tokenizer.fit_on_texts expects a list of texts, whereas you are passing it a single string. The same applies to tokenizer.texts_to_sequences(). Try passing a list to both methods:
from keras.preprocessing.text import Tokenizer
max_words = 10000
text = 'Decreased glucose-6-phosphate dehydrogenase ...'
tokenizer = Tokenizer(num_words=max_words, filters='!"#$%&()*+,./:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts([text])
sequences = tokenizer.texts_to_sequences([text])
This gives you a list of integer sequences encoding the words of the sentence, which is probably what your use case calls for:
sequences
[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]]
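If these sequences are going to feed a Keras model, the usual next step is padding them to a fixed length; a minimal sketch using pad_sequences (the maxlen value here is an arbitrary assumption):

from keras.preprocessing.sequence import pad_sequences

# Pad/truncate every sequence to the same length so they can be batched.
padded = pad_sequences(sequences, maxlen=20)  # maxlen chosen arbitrarily
print(padded.shape)  # (1, 20)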