Counting the frequency of specific words in multiple articles within a text file

Asked: 2016-11-15 14:14:05

Tags: python python-3.x counter word-frequency

I want to count the occurrences of a list of words for each article contained in a single text file. Each article can be identified because they all start with a common token, "<p> Advertisement".

Here is a sample of the text file:

"[<p>Advertisement ,   By   TIM ARANGO  ,     SABRINA TAVERNISE   and     CEYLAN YEGINSU    JUNE 28, 2016 
 ,Credit Ilhas News Agency, via Agence France-Presse — Getty Images,ISTANBUL ......]
[<p>Advertisement ,   By  MILAN SCHREUER  and     ALISSA J. RUBIN    OCT. 5, 2016 
 ,  BRUSSELS — A man wounded two police officers with a knife in Brussels around noon 
on Wednesday in what the authorities called “a potential terrorist attack.” ,  
The two ......]" 

What I want to do is count the frequency of each word from a csv file I have (containing 20 words) and write output like this:

  id, attack, war, terrorism, people, killed, said
  article_1, 45, 5, 4, 6, 2, 1
  article_2, 10, 3, 2, 1, 0, 0

The words in the csv are stored like this:

attack
people
killed
attacks
state
islamic

As suggested, I first tried to split the whole text file on the <p> tag before starting to count words, and then I tokenized the resulting text.

Here is what I have done so far:

import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# read the csv with the 20 key words and pull out the words themselves
opener = open("News_words_most_common.csv")
words = opener.read()
my_pattern = r'\w+'
x = re.findall(my_pattern, words)

# read the articles file, lower-case it and split it on the <p> tag
file_open = open("Training_News_6.csv")
files = file_open.read()
r = files.lower()
stops = set(stopwords.words("english"))
words = r.split("<p>")

# word_tokenize() expects a string, not a list, so convert first
string = str(words)
token = word_tokenize(string)
print(token)

And here is the output:

['[', "'", "''", '|', '[', "'", ',', "'advertisement", 
',', 'by', 'milan', 'schreuer'.....']', '|', "''", '\\n', "'", ']']

The next step will be to loop over the split articles (now turned into lists of word tokens) and count the frequency of the words from the first file. If you have any suggestions on how to iterate and count, please let me know! Roughly, I imagine something like the sketch below.
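
This is only an untested sketch of what I have in mind, reusing the words list (the article chunks) and the x vocabulary from the code above:

from collections import Counter

# untested sketch: 'words' is the list of article chunks, 'x' the key words from the csv
for i, article in enumerate(words[1:], start=1):
    counts = Counter(word_tokenize(article))
    print('article_%d,' % i, ', '.join(str(counts[w]) for w in x))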

I am using Python 3.5 on Anaconda.

4 Answers:

Answer 0 (score: 1)

You could try reading your text file and then splitting on '<p>' (if, as you say, it marks the start of each new article); that gives you a list of articles, and a simple loop with a count will do the job.
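
A minimal sketch of that idea, assuming the articles are in a file called articles.txt and the 20 key words in vocabulary.txt (both names are placeholders to adapt to your own files):

from collections import Counter

# placeholder file names -- adapt to your own files
vocabulary = [line.strip() for line in open('vocabulary.txt') if line.strip()]
articles = open('articles.txt').read().lower().split('<p>advertisement')

# one row of counts per article; articles[0] is whatever comes before the first tag
print('id, ' + ', '.join(vocabulary))
for i, article in enumerate(articles[1:], start=1):
    counts = Counter(article.split())   # crude whitespace tokenization
    print('article_%d, ' % i + ', '.join(str(counts[w]) for w in vocabulary))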

I would also suggest looking at the nltk module. I am not sure what your final goal is, but nltk makes it easy to do this and much more (for example, instead of just counting how many times a word appears in each article, you can compute frequencies and even scale them by inverse document frequency, the so-called tf-idf).
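
For example, nltk's FreqDist gives per-article counts and relative frequencies in a couple of lines (a sketch only, with a made-up one-sentence article; for the tf-idf scaling, scikit-learn's TfidfVectorizer is one ready-made option):

from nltk import FreqDist
from nltk.tokenize import word_tokenize

article = "A man wounded two police officers with a knife in Brussels"  # one article's text
freq = FreqDist(word_tokenize(article.lower()))

print(freq['police'])       # raw count of a word in this article
print(freq.freq('police'))  # relative frequency (count / total tokens)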

Answer 1 (score: 1)

You could try using pandas and sklearn:

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

# read the 20 key words (one per line) and split the text file into articles
vocabulary = [word.strip() for word in open('vocabulary.txt').readlines()]
corpus = open('articles.txt').read().split('<p>Advertisement')

# count only the words in the given vocabulary, one row per article
vectorizer = CountVectorizer(min_df=1, vocabulary=vocabulary)
words_matrix = vectorizer.fit_transform(corpus)
df = pd.DataFrame(data=words_matrix.todense(),
                  index=('article_%s' % i for i in range(words_matrix.shape[0])),
                  columns=vectorizer.get_feature_names())
df.index.name = 'id'
df.to_csv('articles.csv')

In the file articles.csv:

$ cat articles.csv
id,attack,people,killed,attacks,state,islamic
article_0,0,0,0,0,0,0
article_1,0,0,0,0,0,0
article_2,1,0,0,0,0,0

Answer 2 (score: 0)

Maybe I am not getting the task right...

If you want to do text classification, you can use a standard scikit vectorizer such as Bag of Words, which takes text and returns an array of word counts. You can feed that directly into a classifier, or write it out to csv if you really need the csv. It is already included in scikit, which ships with Anaconda.

Another way is to split manually: load the data, split it into words, count them, exclude stop words (is that what you need?) and write the result to an output file. Something like:

    import re
    from collections import Counter

    stopwords = set()  # fill in whatever stop words you want to exclude, if any
    txt = open('file.txt', 'r').read()
    words = re.findall('[a-z]+', txt, re.I)      # keep only alphabetic tokens
    cnt = Counter(w for w in words if w not in stopwords)
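
To then get the per-article csv from the question, the same counting could be done per chunk after splitting on the article tag (a rough sketch continuing from the snippet above; key_words stands in for the 20 words from your csv):

    import csv

    key_words = ['attack', 'people', 'killed']  # stand-in for the 20 words from your csv
    articles = txt.split('<p>Advertisement')[1:]

    with open('result.csv', 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(['id'] + key_words)
        for i, article in enumerate(articles, start=1):
            counts = Counter(w.lower() for w in re.findall('[a-z]+', article, re.I))
            writer.writerow(['article_%d' % i] + [counts[w] for w in key_words])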

Answer 3 (score: 0)

How about this:

import re
from collections import Counter

# toy stand-in for the rows read from the csv file
csv_data = [["'", "\\n", ","], ['fox'],
            ['the', 'fox', 'jumped'],
            ['over', 'the', 'fence'],
            ['fox'], ['fence']]
key_words = ['over', 'fox']
words_list = []

# keep only the alphabetic part of every cell
for row in csv_data:
    for cell in row:
        line_of_words = ",".join(re.findall("[a-zA-Z]+", cell))
        words_list.append(line_of_words)
word_count = Counter(words_list)

# keep only the counts of the key words
match_dict = {}
for aword, freq in word_count.items():
    if aword in key_words:
        match_dict[aword] = freq

And the results:

print('Article words: ', words_list)
print('Article Word Count: ', word_count)
print('Matches: ', match_dict)

Article words:  ['', 'n', '', 'fox', 'the', 'fox', 'jumped', 'over', 'the', 'fence', 'fox', 'fence']
Article Word Count:  Counter({'fox': 3, '': 2, 'the': 2, 'fence': 2, 'n': 1, 'over': 1, 'jumped': 1})
Matches:  {'over': 1, 'fox': 3}