BeautifulSoup: scraping and iterating

Date: 2014-11-02 21:48:53

Tags: python web-scraping beautifulsoup iteration

>>> soup = BeautifulSoup(html)  
>>> om = soup.find_all('td', {'class': 'rec_title_ppnlist'})  
>>> om
[<td class="rec_title_ppnlist">  
<div><a class=" link_gen " href="SHW?FRST=1">Wambold von Umstadt, Anselm Kasimir, 1583-1647 (Zeit, Lebensdaten)</a></div>  
<div><span>Theologia Germanica : Libellus Aureus Hoc Est Brevis Et Praegnans Quomodo Sit Exuendus Vetus Homo Induendusque Novus</span></div>  
<div><span>Lipsiae : Walther, 1630 [i.e. 1730]</span></div>  
<div class="rec_sep"><img alt="" src="http://gsowww.gbv.de/images/gui/empty.gif" title="" border="" height="1" width="1"></div>

I need to iterate over this bs4.element.ResultSet (the href values run SHW?FRST=1 and so on, roughly 25,000 entries in total). I have a few big problems:

  1. Searching for om only gives me the first 10 records.
  2. I need to build a file containing the information scraped in the search (e.g. Wambold von Umstadt, Anselm Kasimir, 1583-1647 (Zeit, Lebensdaten)).
  3. For some reason I have not been able to get Scrapy working. I believe I can find a solution with BeautifulSoup.

2 Answers:

Answer 0 (score: 0)

Try using:

soup.find_all("td", class_="rec_title_ppnlist")

and see whether that fixes your count problem.

For the second question, call get_text() on each element of the om list.
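Put together, a minimal sketch of both suggestions, using a shortened copy of the HTML from the question:

```python
from bs4 import BeautifulSoup

html = '''<td class="rec_title_ppnlist">
<div><a class=" link_gen " href="SHW?FRST=1">Wambold von Umstadt, Anselm Kasimir, 1583-1647 (Zeit, Lebensdaten)</a></div>
<div><span>Theologia Germanica : Libellus Aureus Hoc Est Brevis Et Praegnans Quomodo Sit Exuendus Vetus Homo Induendusque Novus</span></div>
</td>'''

soup = BeautifulSoup(html, 'html.parser')

# class_= matches even though the attribute value has extra whitespace,
# because BeautifulSoup treats class as a multi-valued attribute
om = soup.find_all('td', class_='rec_title_ppnlist')

# get_text() belongs to each element, not to the ResultSet itself
texts = [cell.get_text(' ', strip=True) for cell in om]
```

Each entry of texts is then one cell's visible text, ready to be written out to a file.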

Answer 1 (score: 0)

How about this?

from bs4 import BeautifulSoup
from urllib.parse import urlparse

html = '''
<html>
<body>
<table>
<tr>
<td class="rec_title_ppnlist">
<div><a class=" link_gen " href="SHW?FRST=0">Wambold von Umstadt, Anselm Kasimir, 1583-1647 (Zeit, Lebensdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=1">Wambold von Umstadt, Anselm Kasimir, 1583-1647 (Zeit, Lebensdaten)</a></div>
<div class="rec_sep"><img alt="" border="" height="1" src="http://gsowww.gbv.de/images/gui/empty.gif" title="" width="1"/></div>
</td>
<td class="rec_title_ppnlist">
<div><a class=" link_gen " href="SHW?FRST=2">Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=2">Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div class="rec_sep"><img alt="" border="" height="1" src="http://gsowww.gbv.de/images/gui/empty.gif" title="" width="1"/></div>
<div><a class=" link_gen " href="SHW?FRST=3">Wambold von Umstadt, Anselm Kasimir, 1583-1647 (Zeit, Lebensdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=4">4Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=5">5Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=6">6Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=7">7Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=8">8Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=9">9Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=10">10Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=11">11Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=12">12Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=13">13Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=25000">13Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
<div><a class=" link_gen " href="SHW?FRST=25001">13Vomelius, Cyprianus, 1535-1587 (Zeit, Wirkungsdaten)</a></div>
</td>
</tr>
</table>
</body>
</html>
'''

soup = BeautifulSoup(html, 'html.parser')
tdefs = soup.find_all('td', {'class': 'rec_title_ppnlist'})

with open('data.txt', 'w') as outfile:
    for tdef in tdefs:
        links = tdef.find_all('a', {'class': 'link_gen'})
        for link in links:
            url = urlparse(link['href'])
            vals = url.query.split('=')
            if vals[0] == 'FRST' and 1 <= int(vals[1]) <= 25000:
                print('%s %s' % (vals[1], link.get_text()))
                outfile.write(link.get_text() + '\n')

I'm sure the part that reads the query string could be done better (parse_qs returns a dict of lists, which seems odd to me).
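For reference, a short sketch of what parse_qs does with one of these hrefs (this uses Python 3's urllib.parse; in Python 2 the same functions live in the urlparse module):

```python
from urllib.parse import urlparse, parse_qs

url = urlparse('SHW?FRST=1')
params = parse_qs(url.query)

# parse_qs maps each key to a *list* of values, because a query string
# may legally repeat the same key: here params == {'FRST': ['1']}
first = int(params['FRST'][0])
```

Using parse_qs instead of splitting on '=' by hand also copes with hrefs that carry more than one query parameter.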

This code does not check the input data for validity (for example, whether a link actually has an href attribute), but it should give you an idea of how to approach the parsing.
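As for only getting the first 10 records: if the server pages its results 10 at a time via the FRST parameter, you would have to request each result page in turn and parse each one. A hypothetical sketch of just the URL generation (the base URL and the page size of 10 are assumptions on my part, not taken from the question):

```python
# Build the URL for every result page, assuming FRST is a 1-based
# record offset and each page shows 10 records (both assumptions).
BASE_URL = 'http://example.gbv.de/SHW'  # hypothetical endpoint
TOTAL_RECORDS = 25000
PAGE_SIZE = 10

page_urls = ['%s?FRST=%d' % (BASE_URL, offset)
             for offset in range(1, TOTAL_RECORDS + 1, PAGE_SIZE)]

# Each URL could then be fetched (e.g. with urllib.request or requests)
# and fed through the parsing code above.
```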