I'm working on a multi-label classification task in Keras (the Toxic Comment Classification challenge on Kaggle). I use the Tokenizer class for some preprocessing, like so:
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(train_sentences)
train_sentences_tokenized = tokenizer.texts_to_sequences(train_sentences)
max_len = 250
X_train = pad_sequences(train_sentences_tokenized, maxlen=max_len)
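For reference, here is a pure-Python sketch of what `pad_sequences` does with its default arguments (pad with 0 at the front, truncate from the front), so it's clear what shape `X_train` ends up with:

```python
def pad_pre(seqs, maxlen):
    """Mimic keras pad_sequences defaults: pre-pad with 0, pre-truncate."""
    out = []
    for s in seqs:
        s = list(s)[-maxlen:]                     # keep only the last maxlen tokens
        out.append([0] * (maxlen - len(s)) + s)   # left-pad with zeros to maxlen
    return out

# every row comes out exactly maxlen long
padded = pad_pre([[1, 2], [1, 2, 3, 4]], maxlen=3)
```

So with maxlen=250, every comment becomes a fixed-length vector of 250 integer word indices.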
This is a good start, but I haven't removed stopwords, stemmed, etc. yet. For stopword removal, here is what I do before the steps above:
from nltk.corpus import stopwords

def filter_stop_words(train_sentences, stop_words):
    for i, sentence in enumerate(train_sentences):
        new_sent = [word for word in sentence.split() if word not in stop_words]
        train_sentences[i] = ' '.join(new_sent)
    return train_sentences

stop_words = set(stopwords.words("english"))
train_sentences = filter_stop_words(train_sentences, stop_words)
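Since Keras itself doesn't stem, one option I've seen is to fold stemming into this same pre-tokenization pass using NLTK's PorterStemmer. A minimal sketch (the inline stopword set here is just illustrative — in practice you'd keep using `stopwords.words("english")` as above):

```python
from nltk.stem import PorterStemmer

# Illustrative stopword set; replace with nltk.corpus.stopwords.words("english").
STOP_WORDS = {"the", "a", "an", "is", "are", "and", "or", "in", "on"}

stemmer = PorterStemmer()

def preprocess(sentences, stop_words=STOP_WORDS):
    """Drop stopwords, then stem the remaining tokens, before Keras tokenization."""
    out = []
    for sentence in sentences:
        words = [stemmer.stem(w) for w in sentence.split() if w not in stop_words]
        out.append(" ".join(words))
    return out
```

The cleaned strings would then be fed to `tokenizer.fit_on_texts` / `texts_to_sequences` exactly as in the snippet at the top.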
Is there a simpler way to do this within Keras? I'd also love stemming capability, but the docs don't indicate it exists:
https://keras.io/preprocessing/text/
Any help on best practices for stopword removal and stemming would be great!
Thanks!