BeautifulSoup iteration not working correctly

Date: 2015-07-18 20:37:37

Tags: python parsing beautifulsoup

from bs4 import BeautifulSoup
import requests
s=requests.Session()
r=s.get('http://www.virginiaequestrian.com/main.cfm?action=greenpages&GPType=8')
soup=BeautifulSoup(r.text,'html5lib')

DataGrid=soup.find('tbody')
test=[]
for tr in DataGrid.find_all('tr')[:3]:
    for td in tr.find_all('td'):
        print td.string

Hi, I am trying to parse the HTML of this website (http://www.virginiaequestrian.com/main.cfm?action=greenpages&GPType=8) and get the table data. I am trying to exclude the first three table rows from my results, but for some reason I cannot get the parser to do it. This is my first real attempt at scraping, and I am completely lost on how to make this work. I am guessing it might have something to do with the html5lib parser I am using, but honestly I do not know. Can someone show me how to get this working?

As a good test, being able to pull just those first three rows would also be useful. That way I can be confident the finished query will pull everything except them.

For example, the first row in the table would be 'Equestrian Websites'.

1 Answer:

Answer 0 (score: 1)

You are not ignoring the first three rows, you are keeping only them: [:3] slices the first three elements from the list:

 DataGrid.find_all('tr')[:3]  # first three elements

It should be the following instead:

 DataGrid.find_all('tr')[3:]  # everything except the first three elements
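The slicing behaviour is easy to check on a plain list; the row names below are made-up stand-ins for the table's leading rows:

```python
# Hypothetical stand-ins for the first rows of the table.
rows = ["header", "spacer", "Equestrian Websites", "row A", "row B"]

print(rows[:3])  # ['header', 'spacer', 'Equestrian Websites'] -- keeps ONLY the first three
print(rows[3:])  # ['row A', 'row B'] -- drops the first three, keeps the rest
```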

from bs4 import BeautifulSoup
import requests

r = requests.get('http://www.virginiaequestrian.com/main.cfm?action=greenpages&GPType=8')
soup = BeautifulSoup(r.content, "html.parser")  # pass an explicit parser

tbl = soup.find("table")
for tag in tbl.find_all("tr")[3:]:
    for td in tag.find_all('td'):
        print(td.text)
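A side note on the difference between the question's td.string and the answer's td.text: .string returns None whenever a tag has more than one child, while .text concatenates all nested text, so .text is usually the safer choice for table cells. A minimal demonstration:

```python
from bs4 import BeautifulSoup

# A cell with two children: an <a> tag and a trailing text node.
cell = BeautifulSoup("<td><a href='#'>link</a> extra</td>", "html.parser").td

print(cell.string)  # None -- the <td> has multiple children, so .string gives up
print(cell.text)    # "link extra" -- .text joins all nested strings
```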

Output when slicing tbl.find_all("tr") as above, using two different parsers:

In [20]: soup=BeautifulSoup(r.content,"html.parser")

In [21]: tbl = soup.find("table")

In [22]: len(tbl.find_all("tr"))
Out[22]: 364

In [23]: len(tbl.find_all("tr")[3:])
Out[23]: 361

In [24]: soup=BeautifulSoup(r.content,"lxml")

In [25]: tbl = soup.find("table")

In [26]: len(tbl.find_all("tr")[3:])
Out[26]: 361

In [27]: len(tbl.find_all("tr"))
Out[27]: 364

If what you actually want are the hrefs, then you should do the following, getting the a tag from each tr. There are 6 tr rows before the first row you actually want, so you need to skip 6:

tbl = soup.find("table")
out = (tag.find('a') for tag in tbl.find_all("tr")[6:])

for a in out:
    print(a["href"])

Output:

main.cfm?action=greenpages&sub=view&ID=9068
main.cfm?action=greenpages&sub=view&ID=9504
main.cfm?action=greenpages&sub=view&ID=10868
main.cfm?action=greenpages&sub=view&ID=10261
main.cfm?action=greenpages&sub=view&ID=10477
main.cfm?action=greenpages&sub=view&ID=10708
main.cfm?action=greenpages&sub=view&ID=11712
main.cfm?action=greenpages&sub=view&ID=12402
main.cfm?action=greenpages&sub=view&ID=12496
..................

To turn these into usable links, just prepend the main URL. Note that the generator above is exhausted after one pass, so it has to be rebuilt first:

out = (tag.find('a') for tag in tbl.find_all("tr")[6:])
for a in out:
    print("http://www.virginiaequestrian.com/{}".format(a["href"]))

Output:

http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=9068
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=9504
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=10868
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=10261
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=10477
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=10708
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=11712
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=12402
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=12496
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=12633
http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=13528

If you open the first link, it will take you to 'Equestrian Websites', i.e. the first piece of data you want.
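Instead of string formatting, the standard library's urljoin (Python 3) can resolve the relative hrefs against the page URL, which also handles edge cases like leading slashes; a small sketch using the first href from the output above:

```python
from urllib.parse import urljoin

base = "http://www.virginiaequestrian.com/main.cfm?action=greenpages&GPType=8"
href = "main.cfm?action=greenpages&sub=view&ID=9068"  # first href from the output above

# urljoin resolves the relative path against the base page's location.
print(urljoin(base, href))
# http://www.virginiaequestrian.com/main.cfm?action=greenpages&sub=view&ID=9068
```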