I am using Python 3.4. I want to get word frequencies from a CSV file, but it shows an error. Here is my code; can anyone help me fix this?
from textblob import TextBlob as tb
import math

words = {}

def tfidf(word, blob, bloblist):
    return tf(word, blob) * idf(word, bloblist)

def tf(word, blob):
    return blob.words.count(word) / len(blob.words)

def n_containing(word, bloblist):
    return sum(1 for blob in bloblist if word in blob)

def idf(word, bloblist):
    return math.log(len(bloblist) / (1 + n_containing(words, bloblist)))

bloblist = open('afterstopwords.csv', 'r').read()
for i, blob in enumerate(bloblist):
    print("Top words in document {}".format(i + 1))
    scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
    sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    for word, score in sorted_words[:3]:
        print("\tWord: {}, TF-IDF: {}".format(word, round(score, 5)))
The error is:
Top words in document 1
Traceback (most recent call last):
File "D:\Python34\tfidf.py", line 45, in <module>
scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
AttributeError: 'str' object has no attribute 'words'
Answer 0 (score: 3)
In your code, bloblist is the raw string returned by read(), so the for loop iterates over single characters, and a plain str has no .words attribute, which is exactly what the traceback says. In the original code from http://stevenloria.com/finding-important-words-in-a-document-using-tf-idf/, bloblist is a list of TextBlob documents:
bloblist = [document1, document2, document3]
Do not change that. Also, before it there is code that builds each document, like:
document1 = tb("""blablabla""")
So that is what I did: I use a function to open the file, where openfile() holds the details of reading it.
txt = openfile()
document1=tb(txt)
bloblist = [document1]
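For reference, here is a minimal sketch of the whole corrected script under those assumptions: it reads afterstopwords.csv directly instead of going through my openfile() helper, treats the whole file as one document, and also fixes the words/word typo in idf() from the question. It follows the shape of the stevenloria.com tutorial, so treat it as a sketch rather than a drop-in answer.

from textblob import TextBlob as tb
import math

def tf(word, blob):
    # Term frequency: occurrences of the word divided by the document length.
    return blob.words.count(word) / len(blob.words)

def n_containing(word, bloblist):
    # Number of documents in the collection whose word list contains the word.
    return sum(1 for blob in bloblist if word in blob.words)

def idf(word, bloblist):
    # Inverse document frequency, with +1 smoothing to avoid division by zero.
    return math.log(len(bloblist) / (1 + n_containing(word, bloblist)))

def tfidf(word, blob, bloblist):
    return tf(word, blob) * idf(word, bloblist)

# bloblist must be a list of TextBlob documents, not the raw string
# returned by read().
with open('afterstopwords.csv', 'r') as f:
    document1 = tb(f.read())
bloblist = [document1]

for i, blob in enumerate(bloblist):
    print("Top words in document {}".format(i + 1))
    scores = {word: tfidf(word, blob, bloblist) for word in blob.words}
    sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True)
    for word, score in sorted_words[:3]:
        print("\tWord: {}, TF-IDF: {}".format(word, round(score, 5)))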
The rest of the original code stays the same (the sketch above shows it all put together). This works, but I can only get it to finish on small files; large files take far too long, and the results did not look very accurate. For plain word counts I used https://rmtheis.wordpress.com/2012/09/26/count-word-frequency-with-python/ instead.
That ran quickly on 9999 lines of 50-75 characters each, looks accurate, and the results seem to match the wordcloud results.
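If it helps, this is roughly the kind of frequency count that approach boils down to. The following is my own minimal collections.Counter sketch, not the exact code from that post, and it assumes the same afterstopwords.csv input with ASCII words.

from collections import Counter
import re

counts = Counter()
with open('afterstopwords.csv', 'r') as f:
    for line in f:
        # Lowercase each line and pull out runs of letters as words.
        counts.update(re.findall(r'[a-z]+', line.lower()))

# Show the ten most frequent words and their counts.
for word, freq in counts.most_common(10):
    print(word, freq)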