I have a problem with my spider. I use Splash with Scrapy to follow the "next page" link, which is generated by JavaScript. After downloading the information from the first page I want to download information from the following pages, but the LinkExtractor does not work correctly. It also looks like the start_requests method is not being used at all. Here is the code:
class ReutersBusinessSpider(CrawlSpider):
    name = 'reuters_business'
    allowed_domains = ["reuters.com"]
    start_urls = (
        'http://reuters.com/news/archive/businessNews?view=page&page=1',
    )

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    def use_splash(self, request):
        request.meta['splash'] = {
            'endpoint': 'render.html',
            'args': {
                'wait': 0.5,
            }
        }
        return request

    def process_value(value):
        m = re.search(r'(\?view=page&page=[0-9]&pageSize=10)', value)
        if m:
            return urlparse.urljoin('http://reuters.com/news/archive/businessNews', m.group(1))

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//*[@class="pageNext"]', process_value='process_value'), process_request='use_splash', follow=False),
        Rule(LinkExtractor(restrict_xpaths='//h2/*[contains(@href,"article")]', process_value='process_value'), callback='parse_item'),
    )

    def parse_item(self, response):
        l = ItemLoader(item=PajaczekItem(), response=response)
        l.add_xpath('articlesection', '//span[@class="article-section"]/text()', MapCompose(unicode.strip), Join())
        l.add_xpath('date', '//span[@class="timestamp"]/text()', MapCompose(parse))
        l.add_value('url', response.url)
        l.add_xpath('articleheadline', '//h1[@class="article-headline"]/text()', MapCompose(unicode.title))
        l.add_xpath('articlelocation', '//span[@class="location"]/text()')
        l.add_xpath('articletext', '//span[@id="articleText"]//p//text()', MapCompose(unicode.strip), Join())
        return l.load_item()
Where is the mistake? Thanks for your help.
Answer 0 (score: 1)
At a quick glance, you are not making your start_requests calls go through Splash... you should be using SplashRequest there, for example:
# requires: from scrapy_splash import SplashRequest
def start_requests(self):
    for url in self.start_urls:
        yield SplashRequest(url, self.parse,
                            endpoint='render.html',
                            args={'wait': 0.5})
This assumes you already have Splash set up appropriately, i.e. the necessary middlewares are enabled in your settings and pointed at the right Splash URL, along with the duplicate filter and HTTP cache storage... No, I have not run your code, but it should be good to go now.
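For reference, a typical scrapy-splash configuration in settings.py looks roughly like the sketch below. The middleware names and orders follow the scrapy-splash README defaults; the SPLASH_URL value is an assumption and should point at wherever your Splash instance actually runs.

# settings.py -- sketch of the scrapy-splash setup assumed above
SPLASH_URL = 'http://localhost:8050'  # assumption: adjust to your Splash instance

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'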
So... unless you have some other reason to use Splash, I see no reason to use it in the initial parse for the article requests; a simple for loop is enough, like...
for next_page in response.css("a.control-nav-next::attr(href)").extract():
    yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
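To make that concrete, here is a minimal sketch of how that loop could sit inside a parse method together with the article links, reusing the article XPath from the question. The selectors are untested assumptions about the reuters.com markup and may need adjusting.

# Sketch only: combines the next-page loop above with the question's article XPath.
def parse(self, response):
    # send each article page to parse_item
    for href in response.xpath('//h2/*[contains(@href, "article")]/@href').extract():
        yield scrapy.Request(response.urljoin(href), callback=self.parse_item)
    # follow the pagination link without going through Splash
    for next_page in response.css("a.control-nav-next::attr(href)").extract():
        yield scrapy.Request(response.urljoin(next_page), callback=self.parse)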