Table extraction: BeautifulSoup vs. Pandas.read_html

Date: 2018-04-24 01:41:39

Tags: python pandas web-scraping html-table beautifulsoup

I have an HTML file fetched from this link, but I cannot extract any table from it with bs4.BeautifulSoup() or pandas.read_html. I know that every row of the table I need starts with <tr class='odd'>. Even so, when I pass soup.find({'class': 'odd'}) or pd.read_html(url, attrs = {'class': 'odd'}), nothing works. Where is the mistake, and what should I do?

The table apparently starts at requests.get(url).content[8359:].

<table style="background-color:#FFFEEE; border-width:thin; border-collapse:collapse; border-spacing:0; border-style:outset;" rules="groups" >
<colgroup>
<colgroup>
<colgroup>
<colgroup>
<colgroup span="3">
<colgroup span="3">
<colgroup span="3">
<colgroup span="3">
<colgroup>
<tbody>
<tr style="vertical-align:middle; background-color:#177A9C">
<th scope="col" style="text-align:center">Ion</th>
<th scope="col" style="text-align:center">&nbsp;Observed&nbsp;<br />&nbsp;Wavelength&nbsp;<br />&nbsp;Vac (nm)&nbsp;</th>
<th scope="col" style="text-align:center; white-space:nowrap">&nbsp;<i>g<sub>k</sub>A<sub>ki</sub></i><br />&nbsp;(10<sup>8</sup> s<sup>-1</sup>)&nbsp;</th>
<th scope="col">&nbsp;Acc.&nbsp;</th>
<th scope="col" style="text-align:center; white-space:nowrap">&nbsp;<i>E<sub>i</sub></i>&nbsp;<br />&nbsp;(eV)&nbsp;</th>
<th>&nbsp;</th>
<th scope="col" style="text-align:center; white-space:nowrap">&nbsp;<i>E<sub>k</sub></i>&nbsp;<br />&nbsp;(eV)&nbsp;</th>
<th scope="col" style="text-align:center" colspan="3">&nbsp;Lower Level&nbsp;<br />&nbsp;Conf.,&nbsp;Term,&nbsp;J&nbsp;</th>
<th scope="col" style="text-align:center" colspan="3">&nbsp;Upper Level&nbsp;<br />&nbsp;Conf.,&nbsp;Term,&nbsp;J&nbsp;</th>
<th scope="col" style="text-align:center">&nbsp;<i>g<sub>i</sub></i>&nbsp;</th>
<th scope="col" style="text-align:center">&nbsp;<b>-</b>&nbsp;</th>
<th scope="col" style="text-align:center">&nbsp;<i>g<sub>k</sub></i>&nbsp;</th>
<th scope="col" style="text-align:center">&nbsp;Type&nbsp;</th>
</tr>
</tbody>

<tbody>
<tr>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
<td>&nbsp;</td>
</tr>

<tr class='odd'>

 <td class="lft1"><b>C I</b>&nbsp;</td>
 <td class="fix">            193.090540&nbsp;</td>
 <td class="lft1">1.02e+01&nbsp;</td>
 <td class="lft1">&nbsp;&nbsp;A</td>
 <td class="fix">1.2637284&nbsp;&nbsp;</td>
 <td class="dsh">-&nbsp;</td>
 <td class="fix">7.68476771&nbsp;</td>
 <td class="lft1">&nbsp;2<i>s</i><sup>2</sup>2<i>p</i><sup>2</sup>&nbsp;</td>
 <td class="lft1">&nbsp;<sup>1</sup>D&nbsp;</td>
 <td class="lft1">&nbsp;2&nbsp;</td>
 <td class="lft1">&nbsp;2<i>s</i><sup>2</sup>2<i>p</i>3<i>s</i>&nbsp;</td>
 <td class="lft1">&nbsp;<sup>1</sup>P&deg;&nbsp;</td>
 <td class="lft1">&nbsp;1&nbsp;</td>
 <td class="rgt">&nbsp;5</td>
 <td class="dsh">-</td>
 <td class="lft1">3&nbsp;</td>
 <td class="cnt"><sup></sup><sub></sub></td>

</tr>

2 answers:

Answer 0 (score: 0)

You need to search for the tag first, and then for the class. So, using the lxml parser:

soup = BeautifulSoup(yourdata, 'lxml')

for i in soup.find_all('tr', attrs={'class': 'odd'}):
    print(i.text)

From this point you can write the data straight to a file, or build an array (a list of lists, one list per row) and load it into pandas, and so on.
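The list-of-lists route mentioned above can be sketched as follows. This is a minimal, self-contained example: the inline HTML fragment and the column names are hypothetical stand-ins for the real page (only three cells per row are kept here), and `html.parser` is used for portability, though `'lxml'` behaves the same if installed.

```python
import pandas as pd
from bs4 import BeautifulSoup

# Hypothetical fragment standing in for the real page's table.
html = """
<table rules="groups">
<tbody>
<tr class='odd'>
 <td><b>C I</b></td><td>193.090540</td><td>1.02e+01</td>
</tr>
<tr class='odd'>
 <td><b>C I</b></td><td>193.2</td><td>5.0e+00</td>
</tr>
</tbody>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# One list per <tr class='odd'> row, one string per <td> cell.
rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
        for tr in soup.find_all("tr", attrs={"class": "odd"})]

# Column names are assumptions for this three-cell sketch.
df = pd.DataFrame(rows, columns=["Ion", "Wavelength", "gA"])
print(df)
```

On the real page you would replace `html` with the fetched content and extend the column list to cover all seventeen cells per row.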

Answer 1 (score: 0)

This code should get you started on the project; however, if you are looking for someone to build the whole thing, request the data, scrape it, store it, and manipulate it, I suggest hiring someone or learning how to do it yourself. HERE is the BeautifulSoup documentation.

Go through the quick start guide once and you will know almost everything you need about bs4.

import requests
from bs4 import BeautifulSoup


url = 'https://physics.nist.gov/'
second_part = 'cgi-bin/ASD/lines1.pl?spectra=C%20I%2C%20Ti%20I&limits_type=0&low_w=190&upp_w=250&unit=1&de=0&format=0&line_out=0&no_spaces=on&remove_js=on&en_unit=1&output=0&bibrefs=0&page_size=15&show_obs_wl=1&unc_out=0&order_out=0&max_low_enrg=&show_av=2&max_upp_enrg=&tsb_value=0&min_str=&A_out=1&A8=1&max_str=&allowed_out=1&forbid_out=1&min_accur=&min_intens=&conf_out=on&term_out=on&enrg_out=on&J_out=on&g_out=on&submit=Retrieve%20Data%27'


page = requests.get(url+second_part)
soup = BeautifulSoup(page.content, "lxml")

whole_table = soup.find('table', rules='groups')

sub_tbody = whole_table.find_all('tbody')
# the two lines above locate the table and its content

# we then continue to iterate through sub-categories i.e. tbody-s > tr-s > td-s 
for tag in sub_tbody:
    if tag.find('tr').find('td'):
        table_rows = tag.find_all('tr')
        for tag2 in table_rows:
            if tag2.has_attr('class'):
                td_tags = tag2.find_all('td')
                print(td_tags[0].text, '<- Is the ion')
                print(td_tags[1].text, '<- Wavelength')
                print(td_tags[2].text, '<- Some formula gk Aki')
                # and so on...
                print('--'*40) # unnecessary, but prints a ---------- separator

    else:
        pass
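As for the pandas.read_html attempt from the question: read_html's `attrs` argument matches attributes of the `<table>` tag itself, not of its rows, which is why `attrs = {'class': 'odd'}` finds nothing (the `class='odd'` lives on `<tr>` elements). Matching an attribute that really is on the table, such as `rules="groups"`, works. A minimal sketch with a hypothetical inline fragment (read_html needs lxml, bs4, or html5lib installed):

```python
import io
import pandas as pd

# Hypothetical fragment: class='odd' is on the row, rules='groups' on the table.
html = """
<table rules="groups">
<tr><th>Ion</th><th>Wavelength</th></tr>
<tr class='odd'><td>C I</td><td>193.090540</td></tr>
</table>
"""

# Match the table by its own attribute, not the rows' class.
tables = pd.read_html(io.StringIO(html), attrs={"rules": "groups"})
df = tables[0]
print(df)
```

Rows that do not belong to the data (the blank spacer rows in the real page) would still be picked up this way, so some post-filtering of the resulting DataFrame is likely needed.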