I'm teaching myself Python and have completed a basic text summarizer. I'm almost happy with the summary it produces, but I want to polish the final product a bit more.

The code correctly performs some standard text processing (tokenization, stop-word removal, etc.). It then scores each sentence by weighted word frequency. I'm using heapq.nlargest() to return the top 7 sentences, which I feel it does well based on my sample text.

The issue I'm facing is that the top 7 sentences are returned sorted from highest score to lowest score. I understand WHY this happens. I would like to keep the sentences in the same order in which they appear in the original text. I've included the relevant code below and hope someone can point me in the right direction.
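For reference, `heapq.nlargest()` always returns its results ordered from largest to smallest by the given key, which is why the summary comes out in score order rather than document order. A minimal sketch (with made-up sentences and scores):

```python
import heapq

# hypothetical sentence scores, just to show the ordering behavior
scores = {"sent A": 0.4, "sent B": 0.9, "sent C": 0.7}

top2 = heapq.nlargest(2, scores, key=scores.get)
print(top2)  # ['sent B', 'sent C'] -- descending by score, not document order
```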
import heapq
import nltk

# remove all stopwords from text, build clean list of lower-case words
clean_data = []
for word in tokens:
    if str(word).lower() not in stoplist:
        clean_data.append(word.lower())

# build dictionary of all words with frequency counts: {key:value = word:count}
word_frequencies = {}
for word in clean_data:
    if word not in word_frequencies:
        word_frequencies[word] = 1
    else:
        word_frequencies[word] += 1

# update the dictionary with a weighted frequency
maximum_frequency = max(word_frequencies.values())
for word in word_frequencies.keys():
    word_frequencies[word] = word_frequencies[word] / maximum_frequency

# iterate through each sentence and sum the weighted scores of its words
sentence_scores = {}
for sent in sentence_list:
    for word in nltk.word_tokenize(sent.lower()):
        if word in word_frequencies:
            if len(sent.split(' ')) < 30:
                if sent not in sentence_scores:
                    sentence_scores[sent] = word_frequencies[word]
                else:
                    sentence_scores[sent] += word_frequencies[word]

summary_sentences = heapq.nlargest(7, sentence_scores, key=sentence_scores.get)
summary = ' '.join(summary_sentences)
print(summary)
I'm testing with this article: https://www.bbc.com/news/world-australia-45674716

Current output: "Australia bank inquiry: 'They don't care who they hurt' The inquiry has also heard testimony about corporate fraud, bribery of bank staff, conduct intended to deceive regulators, and reckless practices. This year the royal commission, the country's highest form of public inquiry, has exposed widespread wrongdoing across the industry. The royal commission was called after more than a decade of scandals in Australia's largest industry, its financial sector. Treasurer Josh Frydenberg said: "[The report] shines a very bright light on the poor behaviour of our financial sector." "When misconduct was revealed, it either went unpunished or the consequences did not meet the seriousness of what had been done," he said. The bank customers who lost everything He also criticised regulators over misconduct by banks and financial companies. It also received more than 9,300 submissions alleging misconduct by banks, financial advisers, pension funds and insurance companies."

As an example of the desired output: the third sentence above, "This year the royal commission, the country's highest form of public inquiry, has exposed widespread wrongdoing across the industry.", actually appears before "Australia bank inquiry: 'They don't care who they hurt'" in the original article, and I would like the output to maintain that sentence order.
Answer 0 (score: 0)

Got it working, in case anyone else is curious:
# iterate through each sentence and combine the weighted scores of its words
from operator import itemgetter

sentence_scores = {}
cnt = 0
for sent in sentence_list:
    score = 0
    if len(sent.split(' ')) < 30:
        for word in nltk.word_tokenize(sent.lower()):
            if word in word_frequencies:
                score += word_frequencies[word]
    # store [score, original position] for each sentence
    sentence_scores[sent] = [score, cnt]
    cnt = cnt + 1

# sort by [score, index] in descending order and keep the top 7 sentences
top7 = dict(sorted(sentence_scores.items(), key=itemgetter(1), reverse=True)[0:7])

# re-sort the top 7 by original position (the second element) in ascending order
sentence_summary = sorted(top7.values(), key=lambda pair: pair[1])

# rebuild the summary in original document order
summary = ""
for value in sentence_summary:
    for key in top7:
        if top7[key] == value:
            summary = summary + key
print(summary)
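A more compact alternative (not from the answer above, just a sketch assuming `sentence_list` holds the sentences in document order) is to keep the original `heapq.nlargest()` call and then re-sort the selected sentences by their position in `sentence_list`:

```python
import heapq

# hypothetical stand-ins for the real sentence list and score dictionary
sentence_list = ["First sentence.", "Second sentence.",
                 "Third sentence.", "Fourth sentence."]
sentence_scores = {"First sentence.": 0.2, "Second sentence.": 0.9,
                   "Third sentence.": 0.5, "Fourth sentence.": 0.8}

# pick the highest-scoring sentences (returned in descending score order)
top3 = heapq.nlargest(3, sentence_scores, key=sentence_scores.get)

# restore original document order before joining
summary = ' '.join(sorted(top3, key=sentence_list.index))
print(summary)  # Second sentence. Third sentence. Fourth sentence.
```

`list.index` is O(n) per lookup, which is fine for a handful of summary sentences; for very long documents a precomputed {sentence: position} dictionary would avoid the repeated scans.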