I need to issue thousands of requests that require a session token for authorization.
Queuing all of the requests at once results in thousands of failures, because the session token expires before the later requests are issued.
Instead, I issue batches of a fixed number of requests, sized so that each batch reliably completes before the session token expires.
When a batch of requests completes, the spider_idle signal fires.
If more requests are needed, the signal handler requests a fresh session token to use with the next batch.
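A minimal sketch of this batching approach, mirroring the dispatcher-based wiring used in the scripts below (the token URL, the token extraction in parse_token, BATCH_SIZE, and the item URLs are illustrative placeholders, not the real project's values):

import scrapy
from scrapy import Request, signals
from scrapy.exceptions import DontCloseSpider
from scrapy.xlib.pydispatch import dispatcher

BATCH_SIZE = 500  # sized so one batch reliably finishes before the token expires

class BatchedAuthSpider(scrapy.Spider):
    name = 'batched_auth'

    def __init__(self):
        dispatcher.connect(self.spider_idle, signals.spider_idle)
        # thousands of URLs still waiting to be fetched (placeholder data)
        self.pending = ['https://example.com/item/%d' % i for i in range(5000)]

    def start_requests(self):
        # fetch the first session token before issuing any batch
        yield Request('https://example.com/token', self.parse_token,
                      dont_filter=True)

    def parse_token(self, response):
        token = response.text.strip()  # placeholder token extraction
        batch, self.pending = self.pending[:BATCH_SIZE], self.pending[BATCH_SIZE:]
        for url in batch:
            yield Request(url, self.parse,
                          headers={'Authorization': 'Bearer %s' % token})

    def spider_idle(self, spider):
        # the current batch has drained: refresh the token if work remains
        if self.pending:
            self.crawler.engine.crawl(
                Request('https://example.com/token', self.parse_token,
                        dont_filter=True),
                spider)
            raise DontCloseSpider

    def parse(self, response):
        pass  # handle the authorized response here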
This works when running a single spider normally, and also when running a single spider via CrawlerProcess.
However, the spider_idle signal fails when multiple spiders are run via CrawlerProcess.
One spider executes its spider_idle handler as expected, but the others fail with the following exception:
2019-06-14 10:41:22 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method ?.spider_idle of <SpideIdleTest None at 0x7f514b33c550>>
Traceback (most recent call last):
  File "/home/loren/.virtualenv/spider_idle_test/local/lib/python2.7/site-packages/scrapy/utils/signal.py", line 30, in send_catch_log
    *arguments, **named)
  File "/home/loren/.virtualenv/spider_idle_test/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "fails_with_multiple_spiders.py", line 25, in spider_idle
    spider)
  File "/home/loren/.virtualenv/spider_idle_test/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 209, in crawl
    "Spider %r not opened when crawling: %s" % (spider.name, request)
I created a repository showing that spider_idle behaves as expected with a single spider, but fails with multiple spiders:
https://github.com/loren-magnuson/scrapy_spider_idle_test
Here is the version that demonstrates the failure:
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy import Request, signals
from scrapy.exceptions import DontCloseSpider
from scrapy.xlib.pydispatch import dispatcher


class SpiderIdleTest(scrapy.Spider):
    custom_settings = {
        'CONCURRENT_REQUESTS': 1,
        'DOWNLOAD_DELAY': 2,
    }

    def __init__(self):
        dispatcher.connect(self.spider_idle, signals.spider_idle)
        self.idle_retries = 0

    def spider_idle(self, spider):
        self.idle_retries += 1
        if self.idle_retries < 3:
            self.crawler.engine.crawl(
                Request('https://www.google.com',
                        self.parse,
                        dont_filter=True),
                spider)
            raise DontCloseSpider("Stayin' alive")

    def start_requests(self):
        yield Request('https://www.google.com', self.parse)

    def parse(self, response):
        print(response.css('title::text').extract_first())


process = CrawlerProcess()
process.crawl(SpiderIdleTest)
process.crawl(SpiderIdleTest)
process.crawl(SpiderIdleTest)
process.start()
Answer 0 (score: 0)
I tried using billiard as an alternative way to run multiple spiders concurrently.
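The spiders were launched in separate billiard processes, roughly like this (a sketch: billiard mirrors the multiprocessing API, and the run_spider helper name is my own):

from billiard import Process
from scrapy.crawler import CrawlerProcess

def run_spider():
    # each process gets its own CrawlerProcess and reactor
    process = CrawlerProcess()
    process.crawl(SpiderIdleTest)
    process.start()

if __name__ == '__main__':
    procs = [Process(target=run_spider) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()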
After running the spiders concurrently in billiard processes, the spider_idle signal still failed, but with a different exception:
Traceback (most recent call last):
  File "/home/louis_powersports/.virtualenv/spider_idle_test/lib/python3.6/site-packages/scrapy/utils/signal.py", line 30, in send_catch_log
    *arguments, **named)
  File "/home/louis_powersports/.virtualenv/spider_idle_test/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "test_with_billiard_process.py", line 25, in spider_idle
    self.crawler.engine.crawl(
AttributeError: 'SpiderIdleTest' object has no attribute 'crawler'
This led me to try changing:
self.crawler.engine.crawl(
    Request('https://www.google.com',
            self.parse,
            dont_filter=True),
    spider)
to:
spider.crawler.engine.crawl(
    Request('https://www.google.com',
            self.parse,
            dont_filter=True),
    spider)
This worked. Billiard was not needed: after making the change above, my original attempt, based on the Scrapy documentation, works. Presumably that is because the global dispatcher delivers every spider's idle signal to every connected handler, so a handler can be invoked with some other spider as its argument; spider.crawler.engine is the engine that actually opened that spider, while self.crawler.engine may belong to a different crawler (hence the "Spider not opened when crawling" error).
The original version, with the one-line fix applied:
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy import Request, signals
from scrapy.exceptions import DontCloseSpider
from scrapy.xlib.pydispatch import dispatcher


class SpiderIdleTest(scrapy.Spider):
    custom_settings = {
        'CONCURRENT_REQUESTS': 1,
        'DOWNLOAD_DELAY': 2,
    }

    def __init__(self):
        dispatcher.connect(self.spider_idle, signals.spider_idle)
        self.idle_retries = 0

    def spider_idle(self, spider):
        self.idle_retries += 1
        if self.idle_retries < 3:
            # use the idle spider's own crawler, not self.crawler
            spider.crawler.engine.crawl(
                Request('https://www.google.com',
                        self.parse,
                        dont_filter=True),
                spider)
            raise DontCloseSpider("Stayin' alive")

    def start_requests(self):
        yield Request('https://www.google.com', self.parse)

    def parse(self, response):
        print(response.css('title::text').extract_first())


process = CrawlerProcess()
process.crawl(SpiderIdleTest)
process.crawl(SpiderIdleTest)
process.crawl(SpiderIdleTest)
process.start()
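As an aside, the cross-spider signal delivery can be avoided entirely by connecting the handler through each spider's own crawler instead of the global dispatcher, which is the wiring the Scrapy docs recommend. A sketch of the relevant part (only the wiring; the batching logic stays as above):

import scrapy
from scrapy import signals

class SignalScopedSpider(scrapy.Spider):
    name = 'signal_scoped'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(SignalScopedSpider, cls).from_crawler(crawler, *args, **kwargs)
        # connect through this crawler's SignalManager so the handler only
        # fires for this spider's own idle signal, not for other crawlers'
        crawler.signals.connect(spider.spider_idle, signal=signals.spider_idle)
        return spider

    def spider_idle(self, spider):
        pass  # same batching logic as above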