I have a very large .csv file (1065 rows x 1 column). Each row contains a sentence. For each row I want to pick out a few important words from a wordlist (another .csv file) and then build term frequencies per row.
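For context, the two files could be loaded roughly like this with the csv module; the file names sentences.csv and wordlist.csv, and the one-value-per-row layout, are assumptions for illustration only:

import csv

# Hypothetical file names; assumed layout: one sentence per row in the
# sentence file and one keyword per row in the wordlist file.
with open('sentences.csv') as f:
    sentences = [row[0] for row in csv.reader(f) if row]

with open('wordlist.csv') as f:
    wordlist = [row[0].strip() for row in csv.reader(f) if row]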
Answer 0 (score 0)
I just threw something together quickly and hope it helps. It could probably be done more efficiently, but it gets the job done.
Example input file
bla bla bla. bla! bla bla apple!, :banana. apple!!!
banana bla bla, apple and banana
peach 12345 bla bla peach and banana, peach, banana! :apple
Code
# Your inputs
list_words = ['apple', 'banana', 'peach']
filename = 'example.txt'

# Characters to strip out when tokenizing each line of the file
rm = ",:;?/-!."

# Read the file line by line and count the keywords (Python 2)
with open(filename, 'r') as fin:
    for count_line, line in enumerate(fin, 1):
        # Remove the punctuation characters listed in rm
        clean_line = filter(lambda x: not (x in rm), line)
        # Holds the count of each keyword for the current line
        words_frequency = {key: 0 for key in list_words}
        for w in clean_line.split():
            if w in list_words:
                words_frequency[w] += 1
        print 'Line', count_line, ':', words_frequency
Output:
Line 1 : {'apple': 2, 'peach': 0, 'banana': 1}
Line 2 : {'apple': 1, 'peach': 0, 'banana': 2}
Line 3 : {'apple': 1, 'peach': 3, 'banana': 2}
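For reference, here is a Python 3 sketch of the same counting loop, using str.translate to strip the punctuation and collections.Counter for the tallying. It keeps the same hard-coded keyword list and example.txt as above; only the key order of the printed dictionaries may differ from the Python 2 output shown.

import collections

list_words = ['apple', 'banana', 'peach']
filename = 'example.txt'

# Translation table that deletes the punctuation characters
rm_table = str.maketrans('', '', ",:;?/-!.")

with open(filename) as fin:
    for count_line, line in enumerate(fin, 1):
        # Strip punctuation, split on whitespace, and count only the keywords
        tokens = line.translate(rm_table).split()
        counts = collections.Counter(w for w in tokens if w in list_words)
        # Report zero for keywords that do not appear on this line
        words_frequency = {w: counts[w] for w in list_words}
        print('Line', count_line, ':', words_frequency)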