Delayed requests in scrapy

Date: 2017-10-11 22:10:06

Tags: scrapy twisted

I want to repeatedly scrape the same URL with different delays. After researching the problem, it seemed the appropriate solution was to use something like the following in parse():
import scrapy
from twisted.internet import defer, reactor

nextreq = scrapy.Request(url, dont_filter=True)
d = defer.Deferred()
delay = 1
# fire the Deferred with the next request after `delay` seconds
reactor.callLater(delay, d.callback, nextreq)
yield d

However, I can't make this work. I get the error message: ERROR: Spider must return Request, BaseItem, dict or None, got 'Deferred'

I'm not familiar with Twisted, so I'm hoping I'm just missing something obvious.

Is there a better way to achieve my goal without fighting the framework so much?

1 Answer:

Answer 0: (score: 3)

I finally found the answer in an old PR:
def parse(self, response):
    req = scrapy.Request(...)
    delay = 0
    # hand the request directly to the engine after `delay` seconds
    reactor.callLater(delay, self.crawler.engine.schedule, request=req, spider=self)
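In context, a minimal sketch of a spider using this to re-scrape its own URL might look like the following (the start URL and the 5-second delay are placeholder assumptions):

import scrapy
from twisted.internet import reactor

class RepeatSpider(scrapy.Spider):
    name = 'repeat'
    start_urls = ['http://example.com']  # hypothetical URL

    def parse(self, response):
        # ... extract data from the response here ...
        req = scrapy.Request(response.url, dont_filter=True)
        # hand the request straight to the engine after 5 seconds
        reactor.callLater(5, self.crawler.engine.schedule,
                          request=req, spider=self)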

However, the spider may quit because it goes idle too soon. Based on the outdated middleware https://github.com/ArturGaspar/scrapy-delayed-requests, this can be worked around with:
from scrapy import signals
from scrapy.exceptions import DontCloseSpider

class ImmortalSpiderMiddleware(object):

    @classmethod
    def from_crawler(cls, crawler):
        s = cls()
        crawler.signals.connect(s.spider_idle, signal=signals.spider_idle)
        return s

    @classmethod
    def spider_idle(cls, spider):
        # vetoing the idle signal keeps the spider from closing prematurely
        raise DontCloseSpider()
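
To enable it, something along these lines would go in the project settings ('myproject.middlewares' is a hypothetical module path; since the class only connects a signal handler, registering it under EXTENSIONS would work just as well):

# settings.py -- sketch; adjust the module path to your project
SPIDER_MIDDLEWARES = {
    'myproject.middlewares.ImmortalSpiderMiddleware': 543,
}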

The final option, with the middleware updated by ArturGaspar, resulted in:

from weakref import WeakKeyDictionary

from scrapy import signals
from scrapy.exceptions import DontCloseSpider, IgnoreRequest
from twisted.internet import reactor

class DelayedRequestsMiddleware(object):
    # per-spider count of requests currently waiting on a delay
    requests = WeakKeyDictionary()

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_idle, signal=signals.spider_idle)
        return ext

    @classmethod
    def spider_idle(cls, spider):
        if cls.requests.get(spider):
            spider.log("delayed requests pending, not closing spider")
            raise DontCloseSpider()

    def process_request(self, request, spider):
        delay = request.meta.pop('delay_request', None)
        if delay:
            self.requests.setdefault(spider, 0)
            self.requests[spider] += 1
            # schedule a copy after the delay ('delay_request' was popped,
            # so the copy is not delayed again) and drop the original
            reactor.callLater(delay, self.schedule_request, request.copy(),
                              spider)
            raise IgnoreRequest()

    def schedule_request(self, request, spider):
        # the delay has elapsed: feed the request back to the engine
        spider.crawler.engine.schedule(request, spider)
        self.requests[spider] -= 1
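
Because process_request is a downloader middleware hook, the middleware would be enabled along these lines (the module path is again a hypothetical assumption):

# settings.py -- sketch; adjust the module path to your project
DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.DelayedRequestsMiddleware': 543,
}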

which can then be used in parse() with:

yield Request(..., meta={'delay_request': 5})
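
Tying this back to the original goal of repeatedly scraping the same URL, a minimal spider sketch (the start URL and the delay are placeholder assumptions) could look like:

import scrapy

class PollingSpider(scrapy.Spider):
    name = 'polling'
    start_urls = ['http://example.com']  # hypothetical URL

    def parse(self, response):
        # ... process the page here ...
        # re-fetch the same URL; the middleware holds it back for 5 seconds,
        # and dont_filter bypasses the duplicate filter
        yield scrapy.Request(response.url, dont_filter=True,
                             meta={'delay_request': 5})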