I am trying to obtain a sparse matrix of term counts for a very large number (~160,000) of documents.
I have cleaned the text and want to loop over all documents (i.e. count-vectorize them one at a time and append the resulting 1xN arrays). The following code works for the word-by-word (unigram) case, but not for bigrams:
cv1 = sklearn.feature_extraction.text.CountVectorizer(stop_words=None, vocabulary=dictionary1)
cv2 = sklearn.feature_extraction.text.CountVectorizer(stop_words=None, vocabulary=dictionary2)

for row in range(start, end+1):
    report_name = fund_reports_table.loc[row, "report_names"]
    raw_report = open("F:/EDGAR_ShareholderReports/" + report_name, 'r', encoding="utf8").read()

    ## word for word
    temp = cv1.fit_transform([raw_report]).toarray()
    res1 = np.concatenate((res1, temp), axis=0)

    ## bigrams
    bigram = set()
    sentences = raw_report.split(".")
    for line in sentences:
        token = nltk.word_tokenize(line)
        bigram = bigram.union(set(list(ngrams(token, 2))))
    temp = cv2.fit_transform(list(bigram)).toarray()
    res2 = np.concatenate((res2, temp), axis=0)
Python returns
"AttributeError: 'tuple' object has no attribute 'lower'"
presumably because the way I am feeding the data into the bigram CountVectorizer is not valid.
"raw_report" is a string. The dictionary of single words is:
dictionary1 =['word1', 'words2',...]
dictionary2 is analogous, but based on bigrams constructed by merging the bigrams of all documents (keeping only unique values, as before), so that the resulting structure is
dictionary2 =[('word1','word2'),('wordn','wordm'),...]
The bigrams of a single document have the same structure, which is why I am confused that Python does not accept the input. Is there a way to fix this, or is my whole approach not very pythonic and starting to backfire?
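To illustrate, the error can be reproduced with nothing but a tuple as input (just my attempt at a minimal example, in case my reading of the traceback is wrong): CountVectorizer seems to treat every element of the input iterable as one document and calls .lower() on it during preprocessing, which a tuple does not support.

from sklearn.feature_extraction.text import CountVectorizer

# Each element of the input iterable is treated as one document and is
# lower-cased during preprocessing, so a tuple fails before any counting.
cv = CountVectorizer()
cv.fit_transform([('word1', 'word2')])
# AttributeError: 'tuple' object has no attribute 'lower'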
Thanks in advance for any help!
Remark: I understand that I could do the whole procedure with a more elaborate CountVectorizer call (e.g. cleaning, tokenizing and counting in one step), but I would prefer to do those steps myself (in order to inspect and store the intermediate output). Also, since I am working with a large amount of text, I am worried about running into memory problems.
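For reference, res1 and res2 are simply pre-allocated before the loop as empty arrays with one column per dictionary entry; the exact lines are omitted above, but roughly:

import numpy as np

# Rough sketch of the (omitted) initialization: empty arrays whose column
# count matches the fixed vocabularies, so np.concatenate can append rows.
res1 = np.empty((0, len(dictionary1)), dtype=int)
res2 = np.empty((0, len(dictionary2)), dtype=int)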
Answer 0 (score: 1)
Your problem comes from the fact that dictionary2 is based on tuples. Here is a minimalist example showing that this approach works when the bigrams are strings. If you want to process each file separately, you can pass it as a list to vectorizer.transform().
from sklearn.feature_extraction.text import CountVectorizer
Doc1 = 'Wimbledon is one of the four Grand Slam tennis tournaments, the others being the Australian Open, the French Open and the US Open.'
Doc2 = 'Since the Australian Open shifted to hardcourt in 1988, Wimbledon is the only major still played on grass'
doc_set = [Doc1, Doc2]
my_vocabulary= ['Grand Slam', 'Australian Open', 'French Open', 'US Open']
vectorizer = CountVectorizer(ngram_range=(2, 2))
vectorizer.fit_transform(my_vocabulary)
term_count = vectorizer.transform(doc_set)
# Show the index key for each bigram
vectorizer.vocabulary_
Out[11]: {'grand slam': 2, 'australian open': 0, 'french open': 1, 'us open': 3}
# Sparse matrix of bigram counts - each row corresponds to a document
term_count.toarray()
Out[12]:
array([[1, 1, 1, 1],
[1, 0, 0, 0]], dtype=int64)
You can modify dictionary2 with a list comprehension:
dictionary2 = [('Grand', 'Slam'), ('Australian', 'Open'), ('French', 'Open'), ('US', 'Open')]
dictionary2 = [' '.join(tup) for tup in dictionary2]
dictionary2
Out[26]: ['Grand Slam', 'Australian Open', 'French Open', 'US Open']
Edit: based on the above, I think you could use the following code:
from sklearn.feature_extraction.text import CountVectorizer

# Modify dictionary2 to be compatible with CountVectorizer
dictionary2_cv = [' '.join(tup) for tup in dictionary2]

# Initialize and train CountVectorizer
cv2 = CountVectorizer(ngram_range=(2, 2))
cv2.fit_transform(dictionary2_cv)

for row in range(start, end+1):
    report_name = fund_reports_table.loc[row, "report_names"]
    raw_report = open("F:/EDGAR_ShareholderReports/" + report_name, 'r', encoding="utf8").read()

    ## word for word
    temp = cv1.fit_transform([raw_report]).toarray()
    res1 = np.concatenate((res1, temp), axis=0)

    ## bigrams
    bigram = set()
    sentences = raw_report.split(".")
    for line in sentences:
        token = nltk.word_tokenize(line)
        bigram = bigram.union(set(list(ngrams(token, 2))))
    # Modify bigram to be compatible with CountVectorizer
    bigram = [' '.join(tup) for tup in bigram]
    # Note: do not fit_transform here - only transform using the trained cv2
    temp = cv2.transform(list(bigram)).toarray()
    res2 = np.concatenate((res2, temp), axis=0)
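One more remark on the memory concern from your question: np.concatenate copies the growing dense array on every iteration. If that becomes a problem, one option (a sketch only, reusing the cv1/cv2 objects from above plus scipy's sparse vstack, which is not part of your original code) is to keep the per-document results sparse and stack them once after the loop:

from scipy.sparse import vstack
from nltk import word_tokenize
from nltk.util import ngrams

res1_parts = []
res2_parts = []
for row in range(start, end+1):
    report_name = fund_reports_table.loc[row, "report_names"]
    raw_report = open("F:/EDGAR_ShareholderReports/" + report_name, 'r', encoding="utf8").read()
    # keep the sparse 1xN row instead of densifying with .toarray()
    res1_parts.append(cv1.fit_transform([raw_report]))
    bigram = set()
    for line in raw_report.split("."):
        bigram = bigram.union(set(ngrams(word_tokenize(line), 2)))
    res2_parts.append(cv2.transform([' '.join(tup) for tup in bigram]))

# stack once at the end; the results stay scipy.sparse matrices
res1 = vstack(res1_parts)
res2 = vstack(res2_parts)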