I need my Scrapy spider to continue on to the next page. What is the correct rule code, and how do I write it?
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from delh.items import DelhItem

class criticspider(CrawlSpider):
    name = "delh"
    allowed_domains = ["consumercomplaints.in"]
    #start_urls = ["http://www.consumercomplaints.in/?search=delhivery&page=2","http://www.consumercomplaints.in/?search=delhivery&page=3","http://www.consumercomplaints.in/?search=delhivery&page=4","http://www.consumercomplaints.in/?search=delhivery&page=5","http://www.consumercomplaints.in/?search=delhivery&page=6","http://www.consumercomplaints.in/?search=delhivery&page=7","http://www.consumercomplaints.in/?search=delhivery&page=8","http://www.consumercomplaints.in/?search=delhivery&page=9","http://www.consumercomplaints.in/?search=delhivery&page=10","http://www.consumercomplaints.in/?search=delhivery&page=11"]
    start_urls = ["http://www.consumercomplaints.in/?search=delhivery"]

    rules = (
        Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@class="pagelinks"]/a/@href',)),
             callback="parse_gen", follow=True),
    )

    def parse_gen(self, response):
        hxs = Selector(response)
        sites = hxs.select('//table[@width="100%"]')
        items = []
        for site in sites:
            item = DelhItem()
            item['title'] = site.select('.//td[@class="complaint"]/a/span/text()').extract()
            item['content'] = site.select('.//td[@class="compl-text"]/div/text()').extract()
            items.append(item)
        return items

spider = criticspider()
Answer 0 (score: 1)
As I understand it, you are trying to scrape two different kinds of pages, and should therefore use two different rules.

Your rules should look something like this:
rules = (
    Rule(LinkExtractor(restrict_xpaths='{{ item selector }}'), callback='parse_gen'),
    Rule(LinkExtractor(restrict_xpaths='//div[@class="pagelinks"]/a[contains(text(), "Next")]')),
)
Explanation:

- The first rule extracts the links to the individual complaint pages and hands them to parse_gen as the callback. The responses it generates are not run through these rules again.
- The second rule extracts the pagination links and follows them without a callback, so the responses it generates are run through the rules again.

Note:

- SgmlLinkExtractor is obsolete; you should use LxmlLinkExtractor (or its alias LinkExtractor) instead (source).
- restrict_xpaths should select the <a> elements themselves, not their @href attribute; the link extractor reads the href on its own.
- That is why the [contains(text(), "Next")] selector is added to the "pagelinks" rule. This way, each "listing page" is requested only once.
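Putting the pieces together, here is a minimal sketch of the corrected spider using the current import paths (scrapy.spiders and scrapy.linkextractors). The item-link XPath ('//td[@class="complaint"]') is an assumption based on the question's markup, and parse_gen reuses the question's selectors; both may need adjusting to the actual pages:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class CriticSpider(CrawlSpider):
    name = "delh"
    allowed_domains = ["consumercomplaints.in"]
    start_urls = ["http://www.consumercomplaints.in/?search=delhivery"]

    rules = (
        # Rule 1: follow each complaint link and parse it with parse_gen.
        # The XPath is an assumed placeholder based on the question's markup.
        Rule(
            LinkExtractor(restrict_xpaths='//td[@class="complaint"]'),
            callback="parse_gen",
        ),
        # Rule 2: follow the "Next" pagination link. No callback is set,
        # so `follow` defaults to True and each new listing page is run
        # through these rules again.
        Rule(
            LinkExtractor(
                restrict_xpaths='//div[@class="pagelinks"]/a[contains(text(), "Next")]'
            )
        ),
    )

    def parse_gen(self, response):
        # Selectors carried over from the question; adjust them if the
        # complaint detail pages use different markup.
        for site in response.xpath('//table[@width="100%"]'):
            yield {
                "title": site.xpath('.//td[@class="complaint"]/a/span/text()').extract(),
                "content": site.xpath('.//td[@class="compl-text"]/div/text()').extract(),
            }

Because the second rule has no callback, Scrapy treats it as follow-only; that, combined with the "Next"-only selector, keeps the crawl moving page by page without re-requesting earlier listing pages.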