BeautifulSoup and an invalid HTML document

Asked: 2014-04-30 17:45:36

Tags: python html parsing html-parsing beautifulsoup

I am trying to parse the document http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm. I want to extract the countries and the names listed at the beginning of the document.

Here is my code:

import urllib
import re
from bs4 import BeautifulSoup
url="http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
soup=BeautifulSoup(urllib.urlopen(url))
attendances_table=soup.find("table", {"width":850})
print attendances_table #this works, I see the whole table
print attendances_table.find_all("tr")

I get the following error:

AttributeError: 'NoneType' object has no attribute 'next_element'

Then I tried the same solution as in this post (I know, it's me again :p): beautifulsoup with an invalid html document

I replaced the line:

soup=BeautifulSoup(urllib.urlopen(url))

with:

return BeautifulSoup(html, 'html.parser')

Now, if I do:

print attendances_table

I only get:

<table border="0" cellpadding="10" cellspacing="0" width="850">
<tr><td valign="TOP" width="42%">
<p><b><u>Belgium</u></b></p></td></tr></table>

What should I change?

2 Answers:

Answer 0 (score: 6)

Use html5lib as the parser; it is very lenient:

soup = BeautifulSoup(urllib.urlopen(url), 'html5lib')

You also need to install the html5lib module first.
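
If it is not already installed, it can typically be added with pip (assuming pip is available on your system):

pip install html5lib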

Demo:

>>> from bs4 import BeautifulSoup
>>> import urllib
>>> url = "http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
>>> soup = BeautifulSoup(urllib.urlopen(url), 'html5lib')
>>> attendances_table = soup.find("table", {"width": 850})
>>> print attendances_table
<table border="0" cellpadding="10" cellspacing="0" width="850">
<tbody><tr><td valign="TOP" width="42%">
<p><b><u>Belgium</u></b>:</p>
<p>Mr Philippe MAYSTADT</p></td>
<td valign="TOP" width="58%">
<p>Deputy Prime Minister, Minister for Finance and Foreign Trade</p></td>
</tr>
...
<tr><td valign="TOP" width="42%">
<b><u></u></b><u></u><p><u><b>Portugal</b></u>:</p>
<p>Mr António de SOUSA FRANCO</p>
<p>Mr Fernando TEIXEIRA dos SANTOS</p></td>
<td valign="TOP" width="58%">
<p>Minister for Finance</p>
<p>State Secretary for the Treasury and Finance</p></td>
</tr>
</tbody></table>

A workaround to make find_all('tr') work:

>>> attendances_table = BeautifulSoup(str(attendances_table), 'html5lib')
>>> print attendances_table.find_all("tr")
[<tr><td valign="TOP" width="42%">
<p><b><u>Belgium</u></b>:</p>
<p>Mr Philippe MAYSTADT</p></td>
...
<tr><td valign="TOP" width="42%">
<b><u></u></b><u></u><p><u><b>Portugal</b></u>:</p>
<p>Mr António de SOUSA FRANCO</p>
<p>Mr Fernando TEIXEIRA dos SANTOS</p></td>
<td valign="TOP" width="58%">
<p>Minister for Finance</p>
<p>State Secretary for the Treasury and Finance</p></td>
</tr>]
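
From here, the countries and names that the question asks for can be pulled out of each row. The following is a minimal sketch (not from the original answer) that continues from the workaround above; it assumes the two-column layout shown in the demo output (country and delegate names in the left cell, titles in the right), and the variable names are only illustrative:

import urllib
from bs4 import BeautifulSoup

url = "http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
soup = BeautifulSoup(urllib.urlopen(url), 'html5lib')
attendances_table = soup.find("table", {"width": 850})
# re-parse the table on its own, as in the workaround above, so that find_all("tr") behaves
attendances_table = BeautifulSoup(str(attendances_table), 'html5lib')

for row in attendances_table.find_all("tr"):
    cells = row.find_all("td")
    if len(cells) < 2 or cells[0].p is None:
        continue  # skip rows that do not follow the two-column layout
    # the first <p> in the left cell holds the country, e.g. "Belgium:"
    country = cells[0].p.get_text(strip=True).rstrip(':')
    # the remaining <p> tags in the left cell hold the delegates' names
    names = [p.get_text(strip=True) for p in cells[0].find_all("p")[1:]]
    print country.encode('utf-8'), '-', ', '.join(names).encode('utf-8')

For the rows shown in the demo this would print lines such as Belgium - Mr Philippe MAYSTADT.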

Answer 1 (score: 2)

Solved!

I just used a different parser library, lxml. Thanks Martijn Pieters!

soup = BeautifulSoup(urllib.urlopen(url), 'lxml')

lxml was the only library that worked for me!
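
Note that the snippets in this question and its answers are written for Python 2 (urllib.urlopen, print statements). A rough sketch of the same approach under Python 3, using urllib.request and the lxml parser from this answer (the parser still has to be installed separately), might look like this:

import urllib.request
from bs4 import BeautifulSoup

url = "http://www.consilium.europa.eu/uedocs/cms_data/docs/pressdata/en/ecofin/acf8e.htm"
soup = BeautifulSoup(urllib.request.urlopen(url), 'lxml')
# attribute values are strings in the parsed document, so match the width as "850"
attendances_table = soup.find("table", {"width": "850"})
print(attendances_table)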