There are a lot of links (hrefs) on the pages whose URLs contain "alpha", and I want to collect them from 20 different pages and append each one to the end of a base URL (the second-to-last line below). The hrefs sit in a table, in td cells whose class is mys-elastic mys-left, and the a element is obviously the one carrying the href attribute. Any help would be greatly appreciated, as I have been working on this for about a week.
import scraperwiki
import lxml.html

# The HTML scraper for the 20 pages that list all the exhibitors
for i in range(1, 21):
    url = 'http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page=' + str(i) + '#GotoResults'
    print url

    # Convert the page HTML to an lxml object
    list_html = scraperwiki.scrape(url)
    root = lxml.html.fromstring(list_html)

    # Both classes on the <td> have to be joined with dots in a CSS selector
    href_elements = root.cssselect('td.mys-elastic.mys-left a')
    for element in href_elements:
        href = element.get('href')
        print href
        page_html = scraperwiki.scrape('http://ahr13.mapyourshow.com' + href)
        print page_html
Answer 0 (score: 12)
No need to mess around with JavaScript; it's all in the HTML:

import scraperwiki
import lxml.html
html = scraperwiki.scrape('http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page=1')
root = lxml.html.fromstring(html)
# get the links
hrefs = root.xpath('//td[@class="mys-elastic mys-left"]/a')
for href in hrefs:
    print 'http://ahr13.mapyourshow.com' + href.attrib['href']
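To also fetch each exhibitor page, as the question intends, the same lookup can be repeated over all 20 listing pages. A minimal sketch, assuming the page parameter simply runs from 1 to 20 as the question describes (variable names here are illustrative):

import scraperwiki
import lxml.html

BASE = 'http://ahr13.mapyourshow.com'
LIST_URL = BASE + '/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page='

for page in range(1, 21):
    # scrape one listing page and parse it
    root = lxml.html.fromstring(scraperwiki.scrape(LIST_URL + str(page)))
    # the <a> inside each matching <td>, exactly as in the answer above
    for link in root.xpath('//td[@class="mys-elastic mys-left"]/a'):
        exhibitor_url = BASE + link.attrib['href']
        print exhibitor_url
        # fetch the individual exhibitor page, as the question wanted
        exhibitor_html = scraperwiki.scrape(exhibitor_url)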
Answer 1 (score: 1)
import lxml.html as lh
from itertools import chain
URL = 'http://ahr13.mapyourshow.com/5_0/exhibitor_results.cfm?alpha=%40&type=alpha&page='
BASE = 'http://ahr13.mapyourshow.com'
path = '//table[2]//td[@class="mys-elastic mys-left"]//@href'
results = []
for i in range(1, 21):
    # lxml.html.parse can fetch a URL directly
    doc = lh.parse(URL + str(i))
    # collect one lazy generator of full URLs per page
    results.append(BASE + href for href in doc.xpath(path))

# flatten the per-page generators into one list
print list(chain(*results))
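Because the XPath ends in //@href, doc.xpath(path) yields the attribute strings directly, so no .attrib lookup is needed here. In place of the final print, the collected links could also be written to the ScraperWiki datastore for later processing; a rough sketch, assuming the classic scraperwiki.sqlite.save(unique_keys, data) call is available on the platform:

import scraperwiki

for url in chain(*results):
    # the URL itself serves as the unique key, so re-running does not duplicate rows
    scraperwiki.sqlite.save(unique_keys=['url'], data={'url': url})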