I'm parsing a long string of text and counting how many times each word occurs in Python. I have a function that works, but I'm looking for advice on whether there is a way to make it more efficient (in terms of speed), and whether there are Python library functions that would do this for me, so I'm not reinventing the wheel.
Can you suggest a more efficient way to count the most common words in a long string (usually more than 1000 words in the string)?
Also, what is the best way to sort the dictionary into a list where the first element is the most common word, the second element is the second most common word, and so on?
test = """abc def-ghi jkl abc
abc"""

def calculate_word_frequency(s):
    # Post: return a list of words ordered from the most
    # frequent to the least frequent
    words = s.split()
    freq = {}
    for word in words:
        if freq.has_key(word):
            freq[word] += 1
        else:
            freq[word] = 1
    return sort(freq)

def sort(d):
    # Post: sort dictionary d into list of words ordered
    # from highest freq to lowest freq
    # eg: For {"the": 3, "a": 9, "abc": 2} should be
    # sorted into the following list ["a", "the", "abc"]
    # I have never used lambdas so I'm not sure this is correct
    return d.sort(cmp=lambda x, y: cmp(d[x], d[y]))

print calculate_word_frequency(test)
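For comparison with the answers below, here is a minimal sketch (assuming Python 3) of the same manual approach with the two bugs fixed: `dict.get` replaces `has_key` (which was removed in Python 3), and `sorted()` with a `key` function replaces the `cmp`-based call, since dictionaries have no `sort` method.

```python
def calculate_word_frequency(s):
    # Count each word, defaulting missing keys to 0 via dict.get
    freq = {}
    for word in s.split():
        freq[word] = freq.get(word, 0) + 1
    # Sort the words by their count, highest frequency first
    return sorted(freq, key=freq.get, reverse=True)

print(calculate_word_frequency("abc def-ghi jkl abc abc"))
```

`sorted()` is stable, so words with equal counts keep their first-seen order.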
Answer 0 (score: 26)
>>> from collections import Counter
>>> test = 'abc def abc def zzz zzz'
>>> Counter(test.split()).most_common()
[('abc', 2), ('zzz', 2), ('def', 2)]
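Since the question asks for a plain list of words rather than `(word, count)` pairs, a small follow-up sketch: strip the counts from `most_common()` with a list comprehension.

```python
from collections import Counter

test = 'abc def abc def zzz zzz zzz'
# most_common() yields (word, count) pairs sorted by count, descending;
# keep only the words to get the ordered list the question asks for
ranked = [word for word, count in Counter(test.split()).most_common()]
print(ranked)
```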
Answer 1 (score: 3)
>>> test = """abc def-ghi jkl abc
abc"""
>>> from collections import Counter
>>> words = Counter()
>>> words.update(test.split()) # Update counter with words
>>> words.most_common() # Print list with most common to least common
[('abc', 3), ('jkl', 1), ('def-ghi', 1)]
Answer 2 (score: 2)
You can also use NLTK (the Natural Language Toolkit). It provides very good libraries for working with and processing text.
For this example you can use:
from nltk import FreqDist
text = "aa bb cc aa bb"
# FreqDist counts items in an iterable; pass the word list,
# not the raw string (a string would be counted character by character)
fdist1 = FreqDist(text.split())
# show the 10 most frequent words in the text
print fdist1.most_common(10)
The result will be:
[('aa', 2), ('bb', 2), ('cc', 1)]
Answer 3 (score: 0)
If you want to display the common words and their counts rather than a list of tuples, here is my code.
from collections import Counter

text = 'abc def ghi def abc abc'  # renamed from str to avoid shadowing the built-in
arr = Counter(text.split()).most_common()
for word, count in arr:
    print(word, count)
Output:
abc 3
def 2
ghi 1