I am using stopwords together with the sentence tokenizer, but when I print the filtered sentences the output still includes the stopwords. The problem is that the stopwords are not being removed from the output. How do I remove stopwords when using the sentence tokenizer?
import string
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer

userinput1 = input("Enter file name:")
myfile1 = open(userinput1).read()
stop_words = set(stopwords.words("english"))
word1 = nltk.sent_tokenize(myfile1)
filtration_sentence = []
for w in word1:
    word = sent_tokenize(myfile1)
    filtered_sentence = [w for w in word if not w in stop_words]
    print(filtered_sentence)
userinput2 = input("Enter file name:")
myfile2 = open(userinput2).read()
stop_words = set(stopwords.words("english"))
word2 = nltk.sent_tokenize(myfile2)
filtration_sentence = []
for w in word2:
    word = sent_tokenize(myfile2)
    filtered_sentence = [w for w in word if not w in stop_words]
    print(filtered_sentence)
stemmer = nltk.stem.porter.PorterStemmer()
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

# remove punctuation, lowercase, stem
def normalize(text):
    return stem_tokens(nltk.sent_tokenize(text.lower().translate(remove_punctuation_map)))

vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')

def cosine_sim(myfile1, myfile2):
    tfidf = vectorizer.fit_transform([myfile1, myfile2])
    return ((tfidf * tfidf.T).A)[0,1]

print(cosine_sim(myfile1, myfile2))
Answer (score: 1)
I don't think you can remove stopwords from a sentence directly. You first have to split the sentence into individual words, e.g. with nltk.word_tokenize, and then check each word against the stopword list. Here is an example:
import nltk
from nltk.corpus import stopwords
stopwords_en = set(stopwords.words('english'))
sents = nltk.sent_tokenize("This is an example sentence. We will remove stop words from this")
sents_rm_stopwords = []
for sent in sents:
    sents_rm_stopwords.append(' '.join(w for w in nltk.word_tokenize(sent) if w.lower() not in stopwords_en))
Output:
['example sentence .', 'remove stop words']
Note that you can also use string.punctuation to remove punctuation tokens as well:
import string
stopwords_punctuation = stopwords_en.union(string.punctuation)  # merge the stopword set with the punctuation characters
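As a minimal usage sketch (assuming the same sents list and the stopwords_punctuation set defined above), you can reuse the merged set in the filtering loop so that stopwords and punctuation are dropped in one pass:

# Sketch: filter both stopwords and punctuation tokens in a single pass.
# Assumes `sents` and `stopwords_punctuation` are defined as above.
sents_clean = []
for sent in sents:
    words = nltk.word_tokenize(sent)
    sents_clean.append(' '.join(w for w in words if w.lower() not in stopwords_punctuation))

print(sents_clean)

This should print ['example sentence', 'remove stop words'], i.e. the same result as before but with the trailing '.' token removed as well.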