Scrapy - error when running a spider with the idle signal

Time: 2019-03-27 13:51:38

Tags: scrapy signals

I am trying to create a spider that keeps running permanently and, as soon as it goes idle, fetches the next URL to parse from a database. Unfortunately, I am already stuck at the very beginning:

# -*- coding: utf-8 -*-
import scrapy

from scrapy import signals
from scrapy import Spider

import logging

class SignalspiderSpider(Spider):
    name = 'signalspider'
    allowed_domains = ['domain.de']

    yet = False

    def start_requests(self):
        logging.log(logging.INFO, "______ Loading requests")
        yield scrapy.Request('https://www.domain.de/product1.html')

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        logging.log(logging.INFO, "______ From Crawler")
        spider = super(SignalspiderSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.idle, signal=scrapy.signals.spider_idle)
        return spider


    def parse(self, response):
        self.logger.info("______ Finished extracting structured data from HTML")
        pass

    def idle(self):
        logging.log(logging.INFO, "_______ Idle state")
        if not self.yet:
            self.crawler.engine.crawl(self.create_request(), self)
            self.yet = True


    def create_request(self):
        logging.log(logging.INFO, "_____________ Create requests")
        yield scrapy.Request('https://www.domain.de/product2.html?dvar_82_color=blau&cgid=')

And the error I get:

2019-03-27 21:41:38 [root] INFO: _______ Idle state
2019-03-27 21:41:38 [root] INFO: _____________ Create requests
2019-03-27 21:41:38 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method RefererMiddleware.request_scheduled of <scrapy.spidermiddlewares.referer.RefererMiddleware object at 0x7f93bcc13978>>
Traceback (most recent call last):
  File "/home/spidy/Documents/spo/lib/python3.5/site-packages/scrapy/utils/signal.py", line 30, in send_catch_log
    *arguments, **named)
  File "/home/spidy/Documents/spo/lib/python3.5/site-packages/pydispatch/robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "/home/spidy/Documents/spo/lib/python3.5/site-packages/scrapy/spidermiddlewares/referer.py", line 343, in request_scheduled
    redirected_urls = request.meta.get('redirect_urls', [])
AttributeError: 'NoneType' object has no attribute 'meta'

What am I doing wrong?

1 answer:

Answer 0 (score: 1):

Try:

from scrapy import Request

def idle(self, spider):
    logging.log(logging.INFO, "_______ Idle state")
    if not self.yet:
        self.yet = True
        self.crawler.engine.crawl(
            Request(url='https://www.domain.de/product2.html?dvar_82_color=blau&cgid=',
                    callback=spider.parse),
            spider)

I am not sure it is correct to create the request in a separate method and hand it over from spider_idle the way you do: because create_request uses yield, calling it returns a generator rather than a single Request object, so engine.crawl() never receives a proper request to schedule.
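
If you do want to keep the helper method, here is a minimal sketch of the same spider with create_request returning a single Request instead of yielding one. The URLs are the asker's own placeholders, and engine.crawl(request, spider) is the Scrapy 1.x signature used in the question:

import logging

import scrapy
from scrapy import Spider, signals


class SignalspiderSpider(Spider):
    name = 'signalspider'
    allowed_domains = ['domain.de']

    yet = False

    def start_requests(self):
        yield scrapy.Request('https://www.domain.de/product1.html')

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(SignalspiderSpider, cls).from_crawler(crawler, *args, **kwargs)
        # call spider.idle whenever the spider has nothing left to do
        crawler.signals.connect(spider.idle, signal=signals.spider_idle)
        return spider

    def parse(self, response):
        self.logger.info("Finished extracting structured data from HTML")

    def idle(self, spider):
        logging.info("Idle state")
        if not self.yet:
            self.yet = True
            # engine.crawl() expects a single Request object, not a generator
            self.crawler.engine.crawl(self.create_request(), spider)

    def create_request(self):
        # return, don't yield: a yield would turn this into a generator function
        return scrapy.Request(
            'https://www.domain.de/product2.html?dvar_82_color=blau&cgid=',
            callback=self.parse,
        )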

See more in Scrapy spider_idle signal - need to add requests with parse item callback.
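
For the original goal (a spider that keeps running and pulls the next URL to parse from a database whenever it goes idle), here is a rough sketch under the same Scrapy 1.x assumptions. The spider name and get_next_url() are hypothetical placeholders for your own database lookup; DontCloseSpider is what keeps the spider open when nothing new is scheduled:

import scrapy
from scrapy import Spider, signals
from scrapy.exceptions import DontCloseSpider


class DbFeedSpider(Spider):
    name = 'dbfeedspider'  # hypothetical spider name

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(DbFeedSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_idle, signal=signals.spider_idle)
        return spider

    def spider_idle(self, spider):
        url = self.get_next_url()
        if url is not None:
            self.crawler.engine.crawl(
                scrapy.Request(url, callback=self.parse), spider)
        # keep the spider open even when no new request was scheduled
        raise DontCloseSpider

    def get_next_url(self):
        # hypothetical placeholder: replace with your real database query
        return None

    def parse(self, response):
        self.logger.info("Parsed %s", response.url)

Because DontCloseSpider is raised on every idle signal, the crawl keeps polling the database until you stop it yourself.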