How to use self.crawler.engine.pause() from a middleware in Scrapy

Asked: 2014-01-14 20:21:26

Tags: python scrapy

I am trying to pause the Scrapy engine (the running crawler) from a middleware.

When I try to call self.crawler.engine.unpause() I get this error:

'cRetry' object has no attribute 'crawler'

Here is my middleware. How can I get access to the crawler object?

import os
import time

from scrapy.contrib.downloadermiddleware.retry import RetryMiddleware
# (in newer Scrapy versions: from scrapy.downloadermiddlewares.retry import RetryMiddleware)
from scrapy.utils.response import response_status_message


class cRetry(RetryMiddleware):

    errorCounter = 0

    def process_response(self, request, response, spider):
        if response.status in self.retry_http_codes:
            reason = response_status_message(response.status)
            return self._retry(request, reason, spider) or response
        elif "error" in response.body:
            self.errorCounter += 1
            if self.errorCounter >= 10:
                # fails here: the middleware has no self.crawler attribute
                self.crawler.engine.pause()
                os.system("restart.sh")
                print "Reset"
                time.sleep(10)
                self.crawler.engine.unpause()
                self.errorCounter = 0
            reason = "Restart Required"
            return self._retry(request, reason, spider) or response
        return response

2 Answers:

Answer 0 (score: 1):

From my understanding, you can override the __init__ and from_crawler methods so that it looks something like this:

class cRetry(RetryMiddleware):

    errorCounter = 0

    def __init__(self, crawler):
        super(cRetry, self).__init__(crawler.settings)
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_response(self, request, response, spider):
        # ...

The exact signature of __init__ does not really matter; the entry point for the main library is always from_crawler(cls, crawler). It is a class method and receives the class as its first argument (which it then uses to call the constructor).
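
For completeness, the custom middleware also needs to be registered in the project settings, replacing the built-in retry middleware so that both do not run at once. A minimal sketch, assuming the class lives in a hypothetical myproject.middlewares module:

# settings.py -- 'myproject.middlewares' is a hypothetical module path
DOWNLOADER_MIDDLEWARES = {
    # disable the built-in retry middleware
    # (newer Scrapy versions: 'scrapy.downloadermiddlewares.retry.RetryMiddleware')
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': None,
    # enable the custom subclass at roughly the same priority as the built-in one
    'myproject.middlewares.cRetry': 550,
}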

Answer 1 (score: 0):

Thanks, aufziehvogel :-)

Your suggestion worked with just one small change: I needed to add @classmethod, and then it worked like a charm.

class cRetry(RetryMiddleware):

    errorCounter = 0

    def __init__(self, crawler):
        super(cRetry, self).__init__(crawler.settings)
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_response(self, request, response, spider):
        # ...