I have been researching different sources online and trying various approaches, but I can only find how to count the frequency of individual words, not of phrases. The code I have so far is:
import collections
import re
wanted = set(['inflation', 'gold', 'bank'])
cnt = collections.Counter()
words = re.findall(r'\w+', open('02.2003.BenBernanke.txt').read().lower())
for word in words:
    if word in wanted:
        cnt[word] += 1
print (cnt)
If possible, I would also like to count the number of times the phrases "central bank" and "high inflation" are used in this text. I appreciate any suggestions or guidance.
Answer 0 (score: 2)
First, here is how I would generate the cnt that you did (to reduce the memory overhead):
import collections
import re

def findWords(filepath):
    with open(filepath) as infile:
        for line in infile:
            # stream the file line by line instead of reading it all at once
            words = re.findall(r'\w+', line.lower())
            yield from words

cnt = collections.Counter(findWords('02.2003.BenBernanke.txt'))
Now, as for your question about phrases:
from itertools import tee

phrases = {'central bank', 'high inflation'}
fw1, fw2 = tee(findWords('02.2003.BenBernanke.txt'))
next(fw2)  # advance the second iterator one word ahead
for w1, w2 in zip(fw1, fw2):
    phrase = ' '.join([w1, w2])
    if phrase in phrases:
        cnt[phrase] += 1
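As an aside, on Python 3.10+ the tee/next idiom can be replaced by itertools.pairwise, which yields the same overlapping pairs:

from itertools import pairwise  # Python 3.10+

for w1, w2 in pairwise(findWords('02.2003.BenBernanke.txt')):
    phrase = ' '.join([w1, w2])
    if phrase in phrases:
        cnt[phrase] += 1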
Hope this helps.
Answer 1 (score: 0)
Assuming the file is not large, this is the simplest way:
# assumes the two-word phrases have been added to the `wanted` set
for w1, w2 in zip(words, words[1:]):
    phrase = w1 + " " + w2
    if phrase in wanted:
        cnt[phrase] += 1
print(cnt)
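The same slicing trick generalizes to longer phrases; here is a sketch for three-word phrases (the wanted_trigrams set below is only an illustration, not taken from the question):

# hypothetical example set, for illustration only
wanted_trigrams = {'high inflation rate'}
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    phrase = ' '.join((w1, w2, w3))
    if phrase in wanted_trigrams:
        cnt[phrase] += 1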
Answer 2 (score: 0)
To count literal occurrences of a couple of phrases in a small file:
with open("input_text.txt") as file:
    text = file.read()
n = text.count("high inflation rate")
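Note that str.count matches raw substrings, so "high inflation rate" is also found inside "high inflation rates". If that matters, a word-boundary regex is safer (a sketch reusing the same file):

import re

with open("input_text.txt") as file:
    text = file.read().lower()

# \b anchors both ends at word boundaries, so 'rate' will not match 'rates'
n = len(re.findall(r'\bhigh inflation rate\b', text))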
The nltk.collocations module provides tools to identify words that frequently occur next to each other:
import nltk
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.collocations import BigramCollocationFinder, TrigramCollocationFinder

# run nltk.download() if there are files missing
words = [word.casefold()
         for sentence in sent_tokenize(text)
         for word in word_tokenize(sentence)]
words_fd = nltk.FreqDist(words)
bigram_fd = nltk.FreqDist(nltk.bigrams(words))
finder = BigramCollocationFinder(words_fd, bigram_fd)
bigram_measures = nltk.collocations.BigramAssocMeasures()
print(finder.nbest(bigram_measures.pmi, 5))
print(finder.score_ngrams(bigram_measures.raw_freq))
# finder can be constructed from words directly
finder = TrigramCollocationFinder.from_words(words)
# drop trigrams containing any word outside `wanted`
finder.apply_word_filter(lambda w: w not in wanted)
# top n results
trigram_measures = nltk.collocations.TrigramAssocMeasures()
print(sorted(finder.nbest(trigram_measures.raw_freq, 2)))
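For the two phrases from the question specifically, you can also count adjacent word pairs directly (a sketch assuming the words list built above):

from collections import Counter

# count every adjacent word pair, then look up the phrases of interest
bigram_counts = Counter(nltk.bigrams(words))
print(bigram_counts[('central', 'bank')])
print(bigram_counts[('high', 'inflation')])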