KeyError in page scraping with BeautifulSoup

Asked: 2015-12-27 17:50:26

Tags: python html python-3.x beautifulsoup html-parsing

I'm writing a small application that involves scraping a few fixed websites. In this case I'm scraping TechCrunch, and I'm stuck because I'm getting a KeyError that I don't think should be there.

Here is the part of the code that does the scraping:

import urllib.request
from bs4 import BeautifulSoup

response = urllib.request.urlopen(self.url)
soup = BeautifulSoup(response.read(), "html.parser")

chunks = soup.find_all('li', class_='river-block')
html = 'TechCrunch:'
html += '<ul>'
for c in chunks:
    print(c.attrs.keys())
    print(c.attrs.values())
    html += '<li>'
    html += c.attrs['data-sharetitle']
    html += '<a href="' + c.attrs['data-permalink'] + '">Read more</a>'
    html += '</li>'
html += '</ul>'

The idea is that the link and the title are stored in the data-permalink and data-sharetitle attributes, respectively. Now, the output of the two print statements is what I expect:

dict_keys(['class', 'data-sharetitle', 'id', 'data-shortlink', 'data-permalink'])
dict_values([['river-block', 'crunch-network'], 'Investing In Artificial\xa0Intelligence', '1251865', 'http://tcrn.ch/1mEbmcG', 'http://techcrunch.com/2015/12/25/investing-in-artificial-intelligence/'])

However, the line html += c.attrs['data-sharetitle'] gives me KeyError: 'data-sharetitle'. Why?

1 Answer:

Answer 0 (score: 1):

Not every li element with the river-block class has a data-sharetitle attribute. Enforce the presence of the required attributes. Replace:

chunks = soup.find_all('li', class_='river-block')

with:

chunks = soup.find_all('li', {"class": "river-block", 
                              "data-sharetitle": True, 
                              "data-permalink": True})