I'm learning how to write a scraper in Python on ScraperWiki. So far so good, but I've spent a couple of days scratching my head over a problem I can't get past: I'm trying to get all the links from a table. It works, but of the list of links running from 001 to 486, it only starts grabbing them at 045. The URL/source is just a list of cities on a website; the source can be seen here:
http://www.tripadvisor.co.uk/pages/by_city.html and the specific HTML starts here:
</td></tr>
<tr><td class=dt1><a href="by_city_001.html">'s-Gravenzande, South Holland Province - Aberystwyth, Ceredigion, Wales</a></td>
<td class=dt1><a href="by_city_244.html">Los Corrales de Buelna, Cantabria - Lousada, Porto District, Northern Portugal</a></td>
</tr>
<tr><td class=dt1><a href="by_city_002.html">Abetone, Province of Pistoia, Tuscany - Adamstown, Lancaster County, Pennsylvania</a> /td>
<td class=dt1><a href="by_city_245.html">Louth, Lincolnshire, England - Lucciana, Haute-Corse, Corsica</a></td>
</tr>
<tr><td class=dt1><a href="by_city_003.html">Adamswiller, Bas-Rhin, Alsace - Aghir, Djerba Island, Medenine Governorate</a> </td>
<td class=dt1><a href="by_city_246.html">Luccianna, Haute-Corse, Corsica - Lumellogno, Novara, Province of Novara, Piedmont</a></td>
</tr>
What I'm after is the links from "by_city_001.html" through to "by_city_486.html". Here's my code:
def scrapeCityList(pageUrl):
    html = scraperwiki.scrape(pageUrl)
    root = lxml.html.fromstring(html)
    print html
    links = root.cssselect('td.dt1 a')
    for link in links:
        url = 'http://www.tripadvisor.co.uk' + link.attrib['href']
        print url
And it's called like this:
scrapeCityList('http://www.tripadvisor.co.uk/pages/by_city.html')
Now when I run it, it only returns the links starting from 045!
Output (045 to 486):
http://www.tripadvisor.co.ukby_city_045.html
http://www.tripadvisor.co.ukby_city_288.html
http://www.tripadvisor.co.ukby_city_046.html
http://www.tripadvisor.co.ukby_city_289.html
http://www.tripadvisor.co.ukby_city_047.html
http://www.tripadvisor.co.ukby_city_290.html and so on...
I've tried changing the selector to:
links = root.cssselect('td.dt1')
and it grabs 487 'elements' like this:
<Element td at 0x13d75f0>
<Element td at 0x13d7650>
<Element td at 0x13d76b0>
But I can't get the 'href' values out of them. And when I select 'a' in the cssselect line, I can't figure out why it misses the first 44 links. I've looked over the code but I just don't know.
Thanks in advance for any help!
Claire
Answer 0 (score: 1)
Your code works fine. You can see it running here: https://scraperwiki.com/scrapers/tripadvisor_cities/
I've added saving to the datastore, so you can see that it actually processes all the links.
import scraperwiki
import lxml.html

def scrapeCityList(pageUrl):
    html = scraperwiki.scrape(pageUrl)
    root = lxml.html.fromstring(html)
    links = root.cssselect('td.dt1 a')
    print len(links)
    batch = []
    for link in links[1:]:  # skip the first link since it's only a link to tripadvisor and not a subpage
        record = {}
        url = 'http://www.tripadvisor.co.uk/' + link.attrib['href']
        record['url'] = url
        batch.append(record)
    scraperwiki.sqlite.save(["url"], data=batch)

scrapeCityList('http://www.tripadvisor.co.uk/pages/by_city.html')
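As a side note on building the URLs: the hrefs are relative, so another option is to resolve them against the page URL with the standard library's urljoin. A minimal sketch (assuming Python 2, as in the code above):

from urlparse import urljoin  # in Python 3 this lives in urllib.parse

page_url = 'http://www.tripadvisor.co.uk/pages/by_city.html'
# urljoin resolves a relative href against the page it came from,
# so 'by_city_001.html' becomes
# 'http://www.tripadvisor.co.uk/pages/by_city_001.html'
print urljoin(page_url, 'by_city_001.html')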
If you use the second CSS selector:
links = root.cssselect('td.dt1')
then you're selecting the td elements rather than the a elements (which are children of the td). You can select the a like this:
url = 'http://www.tripadvisor.co.uk/' + link[0].attrib['href']
This selects the first child element of the td (that's what the [0] does).
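To make that concrete, here's a minimal sketch of the same loop written against the td selector (assuming the same root as in the scraper above; the check on the first child is just a precaution):

for td in root.cssselect('td.dt1'):
    # td[0] is the first child element of the td, i.e. the <a> tag
    if len(td) and td[0].tag == 'a':
        print 'http://www.tripadvisor.co.uk/' + td[0].attrib['href']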
If you want to see all the attributes of an element in lxml.html, use:
print element.attrib
For the td elements this gives:
{'class': 'dt1'}
{'class': 'dt1'}
{'class': 'dt1'}
...
and for the a elements:
{'href': 'by_city_001.html'}
{'href': 'by_city_244.html'}
{'href': 'by_city_002.html'}
...
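Those dictionaries come from a loop like this (again assuming the root from the scraper above):

for td in root.cssselect('td.dt1'):
    print td.attrib   # each td only carries its class, e.g. {'class': 'dt1'}

for a in root.cssselect('td.dt1 a'):
    print a.attrib    # each a carries its href, e.g. {'href': 'by_city_001.html'}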