I have a dataset of meeting summaries, and a dictionary of dictionaries containing word frequencies for each year. I want to turn this dict of dicts into a matrix that compares each year's frequencies against every other year's, to see which years are most similar to each other.
I've made the dictionary into a pandas DataFrame. Keep in mind: it holds words and years.
wordsdf = pd.DataFrame.from_dict(word_dfs, orient='index')
I'm trying to get the years as both the columns and the rows so I can compare them in a co-occurrence matrix. But so far, since the values aren't all integers, I can't just use a dot product. Any suggestions?
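(For context, one route I've seen suggested is a minimal sketch like the following, assuming `word_dfs` maps each year to a `{word: frequency}` dict as described above: fill the missing entries with 0 so the values are all numeric, then compute cosine similarity between the year rows, which works fine with floats.)

```python
import pandas as pd
import numpy as np

# Hypothetical sample data in the same shape as word_dfs: {year: {word: freq}}
word_dfs = {
    2001: {'budget': 3, 'vote': 1},
    2002: {'budget': 2, 'merger': 4},
    2003: {'vote': 5, 'merger': 1},
}

# Years become rows, words become columns; words absent in a year become NaN -> 0
wordsdf = pd.DataFrame.from_dict(word_dfs, orient='index').fillna(0)

# Cosine similarity: normalize each year's row vector, then take dot products
norms = np.linalg.norm(wordsdf.values, axis=1, keepdims=True)
unit = wordsdf.values / norms
sim = pd.DataFrame(unit @ unit.T, index=wordsdf.index, columns=wordsdf.index)

# sim is a year-by-year matrix; the largest off-diagonal entries mark
# the most similar pairs of years
print(sim.round(2))
```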
I tried the following, but none of it worked:
# to create a co-occurrence matrix
from nltk.tokenize import word_tokenize
from itertools import combinations
from collections import Counter
sentences = wordsdf
vocab = set(word_tokenize(' '.join(str(sentences))))
token_sent_list = [word_tokenize(sen) for sen in sentences]
co_occ = {ii:Counter({jj:0 for jj in vocab if jj!=ii}) for ii in vocab}
k=2
for sen in token_sent_list:
    for ii in range(len(sen)):
        if ii < k:
            c = Counter(sen[0:ii+k+1])
            del c[sen[ii]]
            co_occ[sen[ii]] = co_occ[sen[ii]] + c
        elif ii > len(sen)-(k+1):
            c = Counter(sen[ii-k::])
            del c[sen[ii]]
            co_occ[sen[ii]] = co_occ[sen[ii]] + c
        else:
            c = Counter(sen[ii-k:ii+k+1])
            del c[sen[ii]]
            co_occ[sen[ii]] = co_occ[sen[ii]] + c
# Having the final matrix in dict form lets you convert it to different Python data structures
co_occ = {ii:dict(co_occ[ii]) for ii in vocab}
co_occ