scrapy: 'YourSpider' object has no attribute 'crawler'

Date: 2018-05-23 15:29:07

Tags: python scrapy web-crawler

I'm building a simple scraper for a project, and I'm getting this error in my code. It runs anyway, but I'd like to understand and fix it. My spider looks like this:

class BookSpider(scrapy.Spider):

    name = "books"

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.stats)

    def __init__(self, stats):
        self.stats = stats
        self.visited_pages = []

The error message looks like this:

2018-05-23 17:00:50 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://www.goodreads.com/book/show/35036409-my-brilliant-friend> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2018-05-23 17:00:50 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.goodreads.com/book/show/17465515-the-story-of-a-new-name> (referer: https://www.goodreads.com/book/show/35036409-my-brilliant-friend)
Traceback (most recent call last):
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
GeneratorExit
Unhandled error in Deferred:
2018-05-23 17:00:50 [twisted] CRITICAL: Unhandled error in Deferred:

2018-05-23 17:00:50 [twisted] CRITICAL: 
Traceback (most recent call last):
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/twisted/internet/task.py", line 517, in _oneWorkUnit
    result = next(self._iterator)
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/utils/defer.py", line 63, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/core/scraper.py", line 183, in _process_spidermw_output
    self.crawler.engine.crawl(request=output, spider=spider)
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/core/engine.py", line 210, in crawl
    self.schedule(request, spider)
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/core/engine.py", line 216, in schedule
    if not self.slot.scheduler.enqueue_request(request):
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 55, in enqueue_request
    self.df.log(request, self.spider)
  File "/home/m17/elefano/miniconda3/lib/python3.6/site-packages/scrapy/dupefilters.py", line 73, in log
    spider.crawler.stats.inc_value('dupefilter/filtered', spider=spider)
AttributeError: 'BookSpider' object has no attribute 'crawler'

I have a vague feeling it's an initialization problem, but I can't figure out what's wrong with it.

1 answer:

Answer 0 (score: 1)

I don't think your spider is inheriting from the crawler correctly. When I ran into this error, I was able to fix it by adding a super() call inside the from_crawler() method; that call is what attaches the crawler attributes/methods to your custom spider.

Here is an example (see the from_crawler method):

from scrapy import signals
from scrapy import Spider


class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]


    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(DmozSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider


    def spider_closed(self, spider):
        spider.logger.info('Spider closed: %s', spider.name)


    def parse(self, response):
        pass

来源: https://doc.scrapy.org/en/latest/topics/signals.html
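Applied to the BookSpider from the question, the fix would look roughly like the sketch below. It uses a small stand-in base class that mimics the relevant part of Scrapy's Spider.from_crawler() contract (instantiate the spider, then set spider.crawler), so the example runs without Scrapy installed; with Scrapy, you would subclass scrapy.Spider and the same super() call applies. The FakeCrawler class is a hypothetical test double, not part of Scrapy's API.

```python
class Spider:
    """Stand-in for scrapy.Spider: from_crawler() creates the spider
    and attaches the crawler to it, which is the step the question's
    override skipped (hence the AttributeError in the dupefilter)."""

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = cls(*args, **kwargs)
        spider.crawler = crawler  # the attribute the dupefilter looks up
        return spider


class BookSpider(Spider):
    name = "books"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        # Let the base class build the spider and attach .crawler,
        # instead of calling cls(crawler.stats) directly.
        spider = super().from_crawler(crawler, *args, **kwargs)
        spider.stats = crawler.stats      # keep the stats reference you wanted
        spider.visited_pages = []
        return spider


class FakeCrawler:
    """Hypothetical minimal crawler object, just for demonstration."""
    stats = object()


spider = BookSpider.from_crawler(FakeCrawler())
print(hasattr(spider, "crawler"))  # True -- no more AttributeError
```

The key design point is that Scrapy calls from_crawler() (not __init__) to construct the spider, so any override must delegate to super() or it loses the crawler wiring the framework performs there.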