I am trying to summarize text from a URL with nltk in Python 3, but I am not sure why it raises a KeyError. Here is my code:
flasexam.py
import bs4 as bs
import urllib.request
import re
import heapq
import nltk

scraped_data = urllib.request.urlopen('https://en.wikipedia.org/wiki/Machine_learning')
article = scraped_data.read()

parsed_article = bs.BeautifulSoup(article, 'lxml')
paragraphs = parsed_article.find_all('p')

article_text = ""
for p in paragraphs:
    article_text += p.text

article_text = re.sub(r'\[[0-9]*\]', ' ', article_text)
article_text = re.sub(r'\s+', ' ', article_text)

formatted_text = re.sub('[^a-zA-Z]', ' ', article_text)
formatted_text = re.sub(r'\s+', ' ', formatted_text)

sentence_list = nltk.sent_tokenize(article_text)
stopwords = nltk.corpus.stopwords.words('english')

word_freq = {}
for word in nltk.word_tokenize(formatted_text):
    if word not in stopwords:
        if word not in word_freq.keys():
            word_freq[word] = 1
        else:
            word_freq[word] += 1

max_freq = max(word_freq.values())
for word in word_freq.keys():
    word_freq[word] = (word_freq[word]/max_freq)

sentence_scores = {}
for sent in sentence_list:
    for word in nltk.word_tokenize(sent.lower()):
        if len(sent.split(' ')) < 30:
            if sent not in sentence_scores.keys():
                sentence_scores[sent] = word_freq[word]
            else:
                sentence_scores[sent] += word_freq[word]

summary_sentences = heapq.nlargest(7, sentence_scores, key=sentence_scores.get)
summary = ' '.join(summary_sentences)
print(summary)
When I run this code, it shows this error:
Traceback (most recent call last):
File "flasexam.py", line 46, in <module>
sentence_scores[sent] = word_freq[word]
KeyError: 'it'
I am not sure what exactly the error is or how to fix it.
Answer 0 (score: 0)
Replace every bare lookup

    word_freq[word]

with a guarded version:

    if word in word_freq:
        word_freq[word] ...
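Applied to the scoring loop in the question, the guard could look like the sketch below. The sample data stands in for the real NLTK output, and `sent.lower().split()` stands in for `nltk.word_tokenize`, so the example runs without downloading any corpora; `dict.get` is used as an equivalent shorthand for the if/else accumulation, which is my choice rather than part of the original answer.

```python
# word_freq was built with stopwords removed, so words like 'it' are absent.
word_freq = {'machine': 1.0, 'learning': 0.8}
sentence_list = ['It is about machine learning.', 'Machine learning is fun.']

sentence_scores = {}
for sent in sentence_list:
    for word in sent.lower().split():      # stand-in for nltk.word_tokenize
        word = word.strip('.')             # crude punctuation strip for the demo
        if word in word_freq:              # the guard: skip keys not in the dict
            if len(sent.split(' ')) < 30:
                # accumulate the score; .get supplies 0 on first hit
                sentence_scores[sent] = sentence_scores.get(sent, 0) + word_freq[word]

print(sentence_scores)
```

With the guard in place, stopwords such as 'it' are simply skipped instead of raising a KeyError.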
Answer 1 (score: 0)
Python raises a KeyError whenever a dict object is indexed (with the a = adict[key] form) and the key is not present in the dictionary.
In [8]: a = {"name":"Daka", "sex":"male"}
In [9]: a["name"]
Out[9]: 'Daka'
In [10]: a["name_bla"]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-10-91fb451c1e47> in <module>()
----> 1 a["name_bla"]
KeyError: 'name_bla'
Simply put, you are looking up the key 'it', which is not in the dictionary. In your code, 'it' is an English stopword, so it was never added to word_freq; but the scoring loop tokenizes the whole lowercased sentence and looks up every token, stopwords included, hence the KeyError.
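The miss-on-lookup behavior, and the standard-library ways around it, can be shown in a few lines. `dict.get` and `collections.defaultdict` are alternatives I am suggesting here; they do not appear in the question's code.

```python
from collections import defaultdict

a = {"name": "Daka", "sex": "male"}

# Direct indexing raises KeyError for a missing key.
try:
    a["name_bla"]
except KeyError as e:
    print("missing key:", e)

# dict.get returns a default instead of raising.
print(a.get("name_bla", 0))    # → 0

# defaultdict creates the default entry on first access.
counts = defaultdict(int)
counts["it"] += 1              # no KeyError; the count starts at 0
print(counts["it"])            # → 1
```

Either of the last two forms would let the scoring loop treat unseen words as having frequency 0 instead of crashing.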