So the spider I set up is very similar to the Scrapy example.
I want the spider to scrape all the quotes before moving on to the next page. I also want it to parse only 1 quote per second, so if there are 20 quotes on a page, it would take 20 seconds to scrape the quotes and then 1 second to go to the next page.
As of now, my current implementation iterates through each page first before actually getting the quote information.
import scrapy


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # follow links to author pages
        for href in response.css('.author+a::attr(href)').extract():
            yield scrapy.Request(response.urljoin(href),
                                 callback=self.parse_author)

        # follow pagination links
        next_page = response.css('li.next a::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, callback=self.parse)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }
Here are the basics of my settings.py file:
ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 2
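For reference, DOWNLOAD_DELAY is applied between any two consecutive requests to the site (author pages included), not only between quote parses, and CONCURRENT_REQUESTS = 1 keeps a single request in flight. A minimal settings.py sketch, assuming the goal is roughly one request per second:

ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 1            # one request in flight at a time
DOWNLOAD_DELAY = 1                 # ~1 second between consecutive requests
RANDOMIZE_DOWNLOAD_DELAY = False   # keep the delay fixed rather than 0.5x-1.5x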
Answer 0 (score: 1)
You can coordinate how the scrapy.Request objects are yielded.
For example, you can create the next-page request, but only yield it once all the author requests have finished scraping their items.
Example:
import scrapy

# Store shared state about pending author requests
pending_authors = {}


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # process pagination links
        next_page = response.css('li.next a::attr(href)').extract_first()
        next_page_request = None
        if next_page is not None:
            next_page = response.urljoin(next_page)
            # Create the Request object, but do not yield it yet
            next_page_request = scrapy.Request(next_page, callback=self.parse)

        # Request scraping of the authors, and pass along a reference
        # to the Request for the next page
        for href in response.css('.author+a::attr(href)').extract():
            url = response.urljoin(href)
            pending_authors[url] = False  # marks this author as 'not processed'
            yield scrapy.Request(url, callback=self.parse_author,
                                 meta={'next_page_request': next_page_request})

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first().strip()

        item = {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

        # marks this author as 'processed'
        pending_authors[response.url] = True

        # checks if we have finished processing all authors
        if not any(value is False for value in pending_authors.values()):
            yield item
            # requests the next page, after finishing all authors
            next_page_request = response.meta['next_page_request']
            if next_page_request is not None:
                yield next_page_request
        else:
            yield item
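One caveat with this approach (not covered in the answer above): Scrapy's built-in duplicate filter drops repeated requests, so an author linked from several quotes, or seen again on a later page, may never be marked as processed, and the next-page request would then never be yielded. Below is a sketch of the same idea that instead tracks a per-page counter and disables the duplicate filter for author requests; the selectors are the ones from the question, everything else is an assumption:

import scrapy


class AuthorSpider(scrapy.Spider):
    name = 'author'

    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Build the next-page request, but hold it back for now
        next_page = response.css('li.next a::attr(href)').extract_first()
        next_page_request = None
        if next_page is not None:
            next_page_request = scrapy.Request(response.urljoin(next_page),
                                               callback=self.parse)

        # Shared per-page state: how many author pages are still pending,
        # plus the request to yield once they are all done
        hrefs = set(response.css('.author+a::attr(href)').extract())
        state = {'pending': len(hrefs), 'next_page': next_page_request}
        for href in hrefs:
            # dont_filter=True so authors already fetched for an earlier
            # page are requested again and the counter still reaches zero
            yield scrapy.Request(response.urljoin(href),
                                 callback=self.parse_author,
                                 meta={'state': state},
                                 dont_filter=True)

    def parse_author(self, response):
        def extract_with_css(query):
            return response.css(query).extract_first(default='').strip()

        yield {
            'name': extract_with_css('h3.author-title::text'),
            'birthdate': extract_with_css('.author-born-date::text'),
            'bio': extract_with_css('.author-description::text'),
        }

        # Count this author as processed; the last one on the page
        # releases the next-page request
        state = response.meta['state']
        state['pending'] -= 1
        if state['pending'] == 0 and state['next_page'] is not None:
            yield state['next_page']

Because the state dict travels through meta, nothing module-level needs to be reset between runs, and each page's counter is independent of the others.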