Beautiful Soup does not return the full table from an HTML page

Asked: 2016-07-14 04:58:39

Tags: python web-scraping beautifulsoup html-parsing python-requests

I have this web page: http://waterdata.usgs.gov/nwis/wys_rpt?dd_parm_cds=002_00060&wys_water_yr=2015&site_no=06935965

I want to scrape the summary statistics from this page.

The information is stored in a table; inspecting the page shows the table header and the first two rows.

So I set about learning a bit of BeautifulSoup and found my table (it is the last one on the page, hence tables[-1]), but it does not pick up the whole table: it stops after the 'Annual total' row name/entry.

Code:

from bs4 import BeautifulSoup
import requests


base_url = 'http://waterdata.usgs.gov/nwis/wys_rpt?dd_parm_cds=002_00060&wys_water_yr=2015&site_no='
site = '06935965'

url = base_url + site
r = requests.get(url)

soup = BeautifulSoup(r.text, 'html.parser')
tables = soup.find_all('table')

# the summary statistics table is the last one on the page
table = tables[-1]

print(table.text)

Output:

SUMMARY STATISTICS



Water Year 2015
Water Years 2000 - 2015




Annual total

And that's it! Yet the whole table is received by the requests call:

<table class='tables'>
<caption class='table_caption'>SUMMARY STATISTICS</caption>
<thead class='thead'>
<tr>
<th class='tables_th'></th>
<th class='tables_th' colspan='2'>Water Year 2015</th>
<th class='tables_th' colspan='2'>Water Years 2000 - 2015</th>
</tr>
</thead>
<tbody>
<tr>
<td class='tables_date'>Annual total</th>
<td>41,170,000<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td><span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
<tr>
<td class='tables_date'>Annual mean</th>
<td>112,800<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>87,520<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
<tr>
<td class='tables_date'>Highest annual mean</th>
<td><span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>154,900<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>2010</td>
</tr>
<tr>
<td class='tables_date'>Lowest annual mean</th>
<td><span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>42,090<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>2006</td>
</tr>
<tr>
<td class='tables_date'>Highest daily mean</th>
<td>342,000<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jun 20</td>
<td>398,000<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jun 02, 2013</td>
</tr>
<tr>
<td class='tables_date'>Lowest daily mean</th>
<td>40,600<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jan 19</td>
<td>22,900<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jan 26, 2003</td>
</tr>
<tr>
<td class='tables_date'>Annual 7-day minimum</th>
<td>41,410.0<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jan 18</td>
<td>23,630.0<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jan 24, 2003</td>
</tr>
<tr>
<td class='tables_date'>Maximum peak flow</th>
<td>344,000<span class='padding'><sup>a</sup>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jun 20</td>
<td>409,000<span class='padding'><sup>a</sup>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jun 02, 2013</td>
</tr>
<tr>
<td class='tables_date'>Maximum peak stage</th>
<td>31.76<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jun 20</td>
<td>33.80<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td>Jun 02, 2013</td>
</tr>
<tr>
<td class='tables_date'>Annual runoff (cfsm)</th>
<td>0.215<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>0.165<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
<tr>
<td class='tables_date'>Annual runoff (inches)</th>
<td>2.92<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>2.25<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
<tr>
<td class='tables_date'>10 percent exceeds</th>
<td>249,000<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>173,000<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
<tr>
<td class='tables_date'>50 percent exceeds</th>
<td>76,800<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>63,100<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
<tr>
<td class='tables_date'>90 percent exceeds</th>
<td>50,500<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
<td>35,700<span class='padding'>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span></td>
<td></td>
</tr>
</tbody>
</table>

Can anyone see why Beautiful Soup is omitting the rest of the table?

1 answer:

Answer 0 (score: 3)

You need to change the parser to a more lenient one:

soup = BeautifulSoup(r.text, 'html5lib')

lxml will also handle this case:

soup = BeautifulSoup(r.text, 'lxml')
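Once a lenient parser has repaired the rows, they can be pulled apart as usual. As a hypothetical follow-up sketch (the stats dict and the cell indices are assumptions based on the HTML dump in the question, where a label cell is followed by value/date cell pairs):

```python
from bs4 import BeautifulSoup

# Two rows copied from the HTML dump in the question; the real page
# works the same way once fetched and parsed with a lenient parser.
html = """
<table class='tables'>
<tr><td class='tables_date'>Annual total</th><td>41,170,000</td><td></td><td></td><td></td></tr>
<tr><td class='tables_date'>Annual mean</th><td>112,800</td><td></td><td>87,520</td><td></td></tr>
</table>
"""

table = BeautifulSoup(html, 'lxml').find('table')

stats = {}
for row in table.find_all('tr'):
    cells = row.find_all('td')
    if cells and 'tables_date' in (cells[0].get('class') or []):
        # cells[1] is the Water Year 2015 value; cells[3] the 2000-2015 value
        stats[cells[0].get_text(strip=True)] = cells[1].get_text(strip=True)

print(stats)
```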