SCHEDULER_IDLE_BEFORE_CLOSE does not work in scrapy-redis

Date: 2017-09-07 08:16:03

Tags: scrapy

I set SCHEDULER_IDLE_BEFORE_CLOSE = 10, but the spider never closes and just stays in the idle state.
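
For context, the relevant scrapy-redis settings are along these lines (a minimal sketch; the host/port values are placeholders, not my exact configuration):

# settings.py (sketch)
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER_PERSIST = True
# Seconds to wait on an empty queue before letting the spider close.
SCHEDULER_IDLE_BEFORE_CLOSE = 10
REDIS_HOST = 'localhost'
REDIS_PORT = 6379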

I overrode the spider_idle method from scrapy-redis's RedisMixin class:

def spider_idle(self):
    self.schedule_next_requests()
    # With DontCloseSpider no longer raised, the idle signal is not
    # suppressed, so the spider is allowed to shut down.
    # raise DontCloseSpider

Now the spider does close, but it exits before all of the start_urls have been consumed!

Do I need to check the start_urls and requests keys in Redis myself to decide whether to raise DontCloseSpider?
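
For example, I imagine something like the following (an untested sketch; the key names follow the scrapy-redis defaults, and self.server is the Redis connection that RedisMixin sets up):

from scrapy.exceptions import DontCloseSpider

def spider_idle(self):
    self.schedule_next_requests()
    # Keep the spider alive only while Redis still holds work:
    # the start-URLs list or the queue of serialized requests.
    if (self.server.llen('%s:start_urls' % self.name) > 0
            or self.server.zcard('%s:requests' % self.name) > 0):
        raise DontCloseSpider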

I also added an idle callback to the spider (following "The scrapy-redis program does not close automatically"):

import redis
import scrapy

# REDIS_HOST, REDIS_PORT, REDIS_DB and REDIS_START_URLS_KEY come from my settings.

@classmethod
def from_crawler(cls, crawler, *args, **kwargs):
    spider = super(SkuSpider, cls).from_crawler(crawler, *args, **kwargs)
    # Hook my_idle into the spider_idle signal so it runs whenever
    # the engine has no more requests in flight.
    crawler.signals.connect(spider.my_idle, signal=scrapy.signals.spider_idle)
    return spider

def my_idle(self):
    r = redis.StrictRedis(host=REDIS_HOST, port=REDIS_PORT, db=REDIS_DB)
    # Close the spider only once both the start-URLs list and the
    # pending-requests queue are empty in Redis.
    if (r.llen(REDIS_START_URLS_KEY % {'name': self.name}) == 0
            and r.zcard('%s:requests' % self.name) == 0):
        self.crawler.engine.close_spider(self, reason='finished')
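
(Note: the zcard check assumes the default priority queue, which, as far as I understand, keeps pending requests in a sorted set; with FifoQueue or LifoQueue the requests key is a list, so llen would be needed instead.)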

Thanks in advance for any help!

0 Answers:

No answers yet.