Python: parse HTML and generate a tabular text file

Date: 2017-05-11 10:53:59

Tags: python html beautifulsoup html-parsing text-files

Question: I want to parse some HTML and produce a tabular text file, for example:

East Counties
Babergh, http://ratings.food.gov.uk/OpenDataFiles/FHRS297en-GB.xml, 876
Basildon, http://ratings.food.gov.uk/OpenDataFiles/FHRS109en-GB.xml, 1134
...
...

Instead, the txt file only shows East Counties, so the for loop fails to print each new region. My attempted code follows the HTML code below.

HTML code: the code can be found on this html page; the excerpt below corresponds to the table above:

<h2>
                                    East Counties</h2>

                                        <table>
                                            <thead>
                                                <tr>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleLAName_0">Local authority</span>
                                                    </th>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleUpdate_0">Last update</span>
                                                    </th>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleEstablishments_0">Number of businesses</span>
                                                    </th>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleCulture_0">Download</span>
                                                    </th>
                                                </tr>
                                            </thead>

                                        <tr>
                                            <td>
                                                <span id="listRegions_lvFiles_0_laNameLabel_0">Babergh</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_updatedLabel_0">04/05/2017 </span>
                                                at
                                                <span id="listRegions_lvFiles_0_updatedTime_0"> 12:00</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_establishmentsLabel_0">876</span>
                                            </td>
                                            <td>
                                                <a id="listRegions_lvFiles_0_fileURLLabel_0" title="Babergh: English language" href="http://ratings.food.gov.uk/OpenDataFiles/FHRS297en-GB.xml">English language</a>
                                            </td>
                                        </tr>

                                        <tr>
                                            <td>
                                                <span id="listRegions_lvFiles_0_laNameLabel_1">Basildon</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_updatedLabel_1">06/05/2017 </span>
                                                at
                                                <span id="listRegions_lvFiles_0_updatedTime_1"> 12:00</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_establishmentsLabel_1">1,134</span>
                                            </td>
                                            <td>
                                                <a id="listRegions_lvFiles_0_fileURLLabel_1" title="Basildon: English language" href="http://ratings.food.gov.uk/OpenDataFiles/FHRS109en-GB.xml">English language</a>
                                            </td>
                                        </tr>

My attempt:

from xml.dom import minidom
import urllib2
from bs4 import BeautifulSoup

url='http://ratings.food.gov.uk/open-data/'
f = urllib2.urlopen(url)
mainpage = f.read()
soup = BeautifulSoup(mainpage, 'html.parser')

regions=[]
with open('Regions_and_files.txt', 'w') as f:
    for h2 in soup.find_all('h2')[6:]: # Skip the first six h2 headings
        region=h2.text.strip() # Get the text of each h2 without the surrounding whitespace
        regions.append(str(region))
        f.write(region+'\n')
        for tr in soup.find_all('tr')[1:]: # Skip headers
            tds = tr.find_all('td')
            if len(tds)==0:
                continue
            else:
                a = tr.find_all('a')
                link = str(a)[10:67]
                span = tr.find_all('span')
                places = int(str(span[3].text).replace(',', ''))
                f.write("%s,%s,%s" % \
                              (str(tds[0].text)[1:-1], link, places)+'\n')

How can I fix this?

1 answer:

Answer 0 (score: 2)

I'm not familiar with the Beautiful Soup library, but from your code it looks like, for each h2, you are iterating over all the tr elements in the entire document. You should only iterate over the rows of the table that belongs to that particular h2 element.

EDIT: After a quick look at the Beautiful Soup docs, you can use .next_sibling, since each h2 is always followed by a table, i.e. table = h2.next_sibling.next_sibling (called twice because the first sibling is a whitespace string). You can then iterate over all the rows of that table only.
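As a minimal sketch of that idea, here is the loop restricted to each heading's own table. It uses find_next_sibling('table') (a real bs4 method that skips whitespace siblings automatically, so the double .next_sibling isn't needed) and runs on a small inline sample mimicking the page's markup rather than fetching the live URL:

```python
from bs4 import BeautifulSoup

# Hypothetical excerpt mirroring the structure of the real page.
html = """
<h2>East Counties</h2>
<table>
  <thead><tr><th>Local authority</th></tr></thead>
  <tr>
    <td><span>Babergh</span></td>
    <td><span>04/05/2017 </span> at <span> 12:00</span></td>
    <td><span>876</span></td>
    <td><a href="http://ratings.food.gov.uk/OpenDataFiles/FHRS297en-GB.xml">English language</a></td>
  </tr>
</table>
"""

soup = BeautifulSoup(html, 'html.parser')
lines = []
for h2 in soup.find_all('h2'):
    lines.append(h2.text.strip())
    table = h2.find_next_sibling('table')  # only this region's table
    for tr in table.find_all('tr'):
        tds = tr.find_all('td')
        if not tds:
            continue  # header row contains th, not td
        name = tds[0].text.strip()
        link = tr.find('a')['href']
        places = int(tds[2].text.strip().replace(',', ''))
        lines.append("%s, %s, %s" % (name, link, places))
print('\n'.join(lines))
```

Reading the name, count, and link from the cells directly also avoids the fragile str(a)[10:67] slicing in the original attempt.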

The reason you are getting a duplicate for Wales is that the duplicate really does exist in the source.