How to expand the stop word list from NLTK and remove stop words using the expanded list?

Asked: 2015-03-26 09:34:48

Tags: python nlp nltk stop-words

I have tried two methods of removing stop words, and I ran into problems with both:

Method 1:

cachedStopWords = stopwords.words("english")
words_to_remove = """with some your just have from it's /via & that they your there this into providing would can't"""
remove = tu.removal_set(words_to_remove, query)
remove2 = tu.removal_set(cachedStopWords, query)

In this case, only the first removal works; remove2 does not.

Method 2:

lines = tu.lines_cleanup([sentence for sentence in sentence_list], remove=remove)
words = '\n'.join(lines).split()
print words # list of words

The output looks like this: ["Hello", "Good", "day"]

I try to remove the stop words from these words. Here is my code:

for word in words:
    if word in cachedStopwords:
        continue
    else:
        new_words='\n'.join(word)

print new_words

The output looks like this:

H
e
l
l
o

I can't figure out what is wrong with either of the two methods above. Please advise.

4 Answers:

Answer 0 (score: 3)

Use this to extend the stop word list; for example, add your extra words to a set built from the NLTK stop words (a set skips words that are already in the list):

from nltk.corpus import stopwords

# Start from NLTK's English stop words and add your own words on top
stop_words = set(stopwords.words("english"))
print len(stop_words)

extra_stop_words = """with some your just have from it's /via & that they your there this into providing would can't"""
stop_words.update(extra_stop_words.split())
print len(stop_words)

Output:

179

184

Answer 1 (score: 1)

I think what you want to achieve is to extend NLTK's stop word list. Since the stop words in NLTK are kept in a single list, you can simply do this:

>>> from nltk.corpus import stopwords
>>> stoplist = stopwords.words('english')
>>> stoplist
[u'i', u'me', u'my', u'myself', u'we', u'our', u'ours', u'ourselves', u'you', u'your', u'yours', u'yourself', u'yourselves', u'he', u'him', u'his', u'himself', u'she', u'her', u'hers', u'herself', u'it', u'its', u'itself', u'they', u'them', u'their', u'theirs', u'themselves', u'what', u'which', u'who', u'whom', u'this', u'that', u'these', u'those', u'am', u'is', u'are', u'was', u'were', u'be', u'been', u'being', u'have', u'has', u'had', u'having', u'do', u'does', u'did', u'doing', u'a', u'an', u'the', u'and', u'but', u'if', u'or', u'because', u'as', u'until', u'while', u'of', u'at', u'by', u'for', u'with', u'about', u'against', u'between', u'into', u'through', u'during', u'before', u'after', u'above', u'below', u'to', u'from', u'up', u'down', u'in', u'out', u'on', u'off', u'over', u'under', u'again', u'further', u'then', u'once', u'here', u'there', u'when', u'where', u'why', u'how', u'all', u'any', u'both', u'each', u'few', u'more', u'most', u'other', u'some', u'such', u'no', u'nor', u'not', u'only', u'own', u'same', u'so', u'than', u'too', u'very', u's', u't', u'can', u'will', u'just', u'don', u'should', u'now']
>>> more_stopwords = """with some your just have from it's /via & that they your there this into providing would can't"""
>>> stoplist += more_stopwords.split()
>>> sent = "With some of hacks to your line of code , we can simply extract the data you need ."
>>> sent_with_no_stopwords = [word for word in sent.split() if word not in stoplist]
>>> sent_with_no_stopwords
['With', 'hacks', 'line', 'code', ',', 'simply', 'extract', 'data', 'need', '.']
# Note that the "With" is different from "with".
# So let's try this:
>>> sent_with_no_stopwords = [word for word in sent.lower().split() if word not in stoplist]
>>> sent_with_no_stopwords
['hacks', 'line', 'code', ',', 'simply', 'extract', 'data', 'need', '.']
# To get it back into a string:
>>> new_sent = " ".join(sent_with_no_stopwords)
>>> new_sent
'hacks line code , simply extract data need .'
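
As a side note, checking membership in a Python list scans it element by element, and the += above also re-adds words that are already in the list. For longer texts it can be worth using a set instead; a minimal sketch under the same assumptions as above:

from nltk.corpus import stopwords

# Build the stop list as a set: constant-time lookups, and duplicates are ignored automatically
stoplist = set(stopwords.words('english'))
stoplist.update("it's /via & providing can't".split())   # only the extra words not already in NLTK's list
sent = "With some of hacks to your line of code , we can simply extract the data you need ."
print [word for word in sent.lower().split() if word not in stoplist]
# expected: ['hacks', 'line', 'code', ',', 'simply', 'extract', 'data', 'need', '.']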

Answer 2 (score: 0)

You can change method 2 from:

for word in words:
    if word in cachedStopwords:
        continue
    else:
        new_words='\n'.join(word)

print new_words

to:

new_words = []
for word in words:
    if word in stop_words:
        continue
    else:
        new_words.append(word)

print new_words
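
If the intent of the original '\n'.join was to print one word per line, the join belongs after the loop, applied to the whole filtered list rather than to a single word. A minimal self-contained sketch (the words list here is hypothetical, modelled on the output shown in the question):

from nltk.corpus import stopwords

cached_stop_words = set(stopwords.words("english"))
words = ["Hello", "it", "is", "a", "Good", "day"]   # hypothetical input

new_words = [word for word in words if word not in cached_stop_words]
print '\n'.join(new_words)   # prints each remaining word on its own line: Hello, Good, day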

Answer 3 (score: -1)

You need to tokenize your string:

words = string.split()

is one simple way to do it, although nltk also has other tokenizers.
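
For example, nltk.word_tokenize splits off punctuation and contractions, which a plain str.split does not. A small sketch (it assumes the punkt tokenizer data has been downloaded):

from nltk import word_tokenize

sentence = "You'll want to tokenise your string"
print word_tokenize(sentence)
# expected: ['You', "'ll", 'want', 'to', 'tokenise', 'your', 'string']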

Then perhaps a list comprehension:

words = [w for w in words if w not in cachedstopwords]

So this:

from nltk.corpus import stopwords

stop_words = stopwords.words("english")
sentence = "You'll want to tokenise your string"

words = sentence.split()
print words
words = [w for w in words if w not in stop_words]
print words

prints:

["You'll", 'want', 'to', 'tokenise', 'your', 'string']
["You'll", 'want', 'tokenise', 'string']