I'm using the following code, which I found online, to recursively scrape links across multiple pages. It should recursively return all the links I need from every page, but I only ever end up with 100 links. Any advice would be helpful.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from myproject.items import CraigslistSampleItem  # adjust to your project's items module

class MySpider(CrawlSpider):
    name = "craigs"
    allowed_domains = ["craigslist.org"]
    start_urls = ["http://seattle.craigslist.org/search/jjj?is_parttime=1"]

    # Follow pagination links matching the allow pattern inside the "next" button
    rules = (
        Rule(SgmlLinkExtractor(allow=("index\d00\.html", ),
                               restrict_xpaths=('//a[@class="button next"]',)),
             callback="parse_items", follow=True),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
        items = []
        for title in titles:
            item = CraigslistSampleItem()
            item["title"] = title.select("a/text()").extract()
            item["link"] = title.select("a/@href").extract()
            items.append(item)
        return items
Answer 0 (score: 1)
Just remove allow=("index\d00\.html", ) so that the next links can be followed. As written, the pagination URLs never match that pattern, so the rule follows nothing and the spider stops after the first page's 100 results:
rules = (
    Rule(SgmlLinkExtractor(restrict_xpaths=('//a[@class="button next"]',)),
         callback="parse_items", follow=True),
)