I'm trying to access the response (to use its URL as a condition) from the process_links function so that I can rewrite the URLs. Is there a way to do this? Currently I get the error: process_links() takes exactly 3 arguments (2 given)
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class Spider(CrawlSpider):
    name = 'spider_1'
    allowed_domains = ['domain.com']
    start_urls = (
        'http://domain.com/new/1.html?content=image',
        'http://domain.com/new/1.html?content=video',
    )

    rules = [
        Rule(LinkExtractor(allow=(), restrict_xpaths='//div[@class="pagination"]'),
             callback='parse_page', process_links='process_links', follow=True)
    ]

    def process_links(self, links, resp):
        for link in links:
            if 'content=photo' in resp.url:
                link.url = "%s?content=photo" % link.url
            else:
                link.url = "%s?content=video" % link.url
        return links
Answer (score: 1)
Change

    def process_links(self, links, resp):

to

    def process_links(self, links):

You are expecting to receive the response in your function, but Scrapy only passes it the extracted links.
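For reference, CrawlSpider resolves the string 'process_links' to a bound method and calls it with the link list as its only argument. The following standalone snippet (a stand-in sketch, not actual Scrapy source) reproduces the failure without running a crawl:

    class FakeSpider:
        # Same signature as in the question: self + links + resp = 3 parameters.
        def process_links(self, links, resp):
            return links

    spider = FakeSpider()
    hook = getattr(spider, 'process_links')  # roughly what CrawlSpider._compile_rules does
    hook(['link1', 'link2'])                 # raises TypeError: the hook receives the link list only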
Perhaps something like this is what you want:
rules = [
    Rule(LinkExtractor(allow=('content=photo',), restrict_xpaths='//div[@class="pagination"]'),
         callback='parse_page', process_links='process_photo_links', follow=True),
    Rule(LinkExtractor(allow=(), restrict_xpaths='//div[@class="pagination"]'),
         callback='parse_page', process_links='process_video_links', follow=True),
]

# Scrapy supplies only the link list, so no resp parameter here.
def process_photo_links(self, links):
    for link in links:
        link.url = "%s?content=photo" % link.url
    return links

def process_video_links(self, links):
    for link in links:
        link.url = "%s?content=video" % link.url
    return links
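Since the two hooks differ only in the tag they append, you could also register a single parameterized callable; Scrapy's documentation allows process_links to be a callable as well as a method name. A sketch along those lines, where _tag_links is a hypothetical module-level helper:

    from functools import partial

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule

    def _tag_links(links, content):
        # Hypothetical helper: tag each extracted link with the given content type.
        for link in links:
            link.url = "%s?content=%s" % (link.url, content)
        return links

    class Spider(CrawlSpider):
        name = 'spider_1'
        allowed_domains = ['domain.com']
        start_urls = ('http://domain.com/new/1.html?content=video',)

        rules = [
            Rule(LinkExtractor(allow=('content=photo',), restrict_xpaths='//div[@class="pagination"]'),
                 callback='parse_page', process_links=partial(_tag_links, content='photo'), follow=True),
            Rule(LinkExtractor(restrict_xpaths='//div[@class="pagination"]'),
                 callback='parse_page', process_links=partial(_tag_links, content='video'), follow=True),
        ]

        def parse_page(self, response):
            ...  # scrape the page here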
Update after the comment:

Right, Scrapy does not pass the response to process_links. You can simply ignore the rules and generate your own requests:
def parse_page(self, response):
    ...
    links = LinkExtractor(allow=(), restrict_xpaths='//div[@class="pagination"]').extract_links(response)
    for link in links:
        if 'content=photo' in response.url:
            link.url = "%s?content=photo" % link.url
        else:
            link.url = "%s?content=video" % link.url
        yield scrapy.Request(link.url, callback=self.parse_page)
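One caveat with the "%s?content=..." formatting above: if a pagination link already carries a query string, the extra '?' produces a malformed URL. A sketch of a safer variant of parse_page, using the standard library plus add_or_replace_parameter from w3lib (a library Scrapy itself depends on) to carry the content parameter over from the response URL ('video' as the fallback default is an assumption here):

    from urllib.parse import parse_qs, urlparse

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from w3lib.url import add_or_replace_parameter

    def parse_page(self, response):
        # Pull the content type out of the current page's query string;
        # parse_qs returns lists of values, hence the [0]. 'video' is an
        # assumed default for pages without the parameter.
        content = parse_qs(urlparse(response.url).query).get('content', ['video'])[0]
        links = LinkExtractor(restrict_xpaths='//div[@class="pagination"]').extract_links(response)
        for link in links:
            # add_or_replace_parameter appends or rewrites ?content=... correctly
            # whether or not the link already has a query string.
            yield scrapy.Request(add_or_replace_parameter(link.url, 'content', content),
                                 callback=self.parse_page)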