How to solve the error: Scrapy: Enabled item pipelines and Unhandled Error

Asked: 2019-05-18 06:16:38

Tags: python python-3.x scrapy

My scraper logs "Enabled item pipelines" and then fails with an "Unhandled Error".

I am new to the world of scraping, so I don't know how to solve this. The Scrapy version I am using is 1.6.0.

# Books.py
import scrapy
from scrapy.http import Request 

class BooksSpider(scrapy.Spider):
    name = 'Books'
    # allowed_domains should hold bare domain names; a trailing slash makes
    # Scrapy treat the entry as a URL and log a warning.
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        pass  # stub callback: nothing is extracted yet
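
For what it's worth, parse above is still just a stub. Here is a minimal sketch of the callback I eventually want (the CSS selectors are my guess from the books.toscrape.com markup, not tested):

# Books.py (sketch of a filled-in parse)
import scrapy

class BooksSpider(scrapy.Spider):
    name = 'Books'
    allowed_domains = ['books.toscrape.com']
    start_urls = ['http://books.toscrape.com/']

    def parse(self, response):
        # Each book on a listing page sits in <article class="product_pod">.
        for book in response.css('article.product_pod'):
            yield {
                'title': book.css('h3 a::attr(title)').get(),
                'price': book.css('p.price_color::text').get(),
            }
        # Follow the "next" pagination link, if present.
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)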

# settings.py

BOT_NAME = 'books_crawler'
SPIDER_MODULES = ['books_crawler.spiders']
NEWSPIDER_MODULE = 'books_crawler.spiders'
USER_AGENT = 'books_crawler (+http://www.yourdomain.com)'
ROBOTSTXT_OBEY = False
CONCURRENT_REQUESTS = 32
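
One mismatch I notice: the output below reports BOT_NAME 'Books_crawling' and ROBOTSTXT_OBEY: True, which does not match the settings.py shown above, so the crawl may be running against a different project. A quick sketch (my own check script, not part of the project) that prints the settings Scrapy actually resolves from scrapy.cfg:

# check_settings.py -- run from the project root. Prints the settings
# Scrapy resolves for the active project, to confirm that the intended
# settings.py is actually being picked up.
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
print(settings.get('BOT_NAME'))             # expected: 'books_crawler'
print(settings.getbool('ROBOTSTXT_OBEY'))   # expected: False
print(settings.getlist('SPIDER_MODULES'))   # expected: ['books_crawler.spiders']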

Here is the output:

2019-05-18 10:53:04 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: Books_crawling)
2019-05-18 10:53:04 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b  26 Feb 2019), cryptography 2.6.1, Platform Windows-10-10.0.17134-SP0
2019-05-18 10:53:04 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'Books_crawling', 'NEWSPIDER_MODULE': 'Books_crawling.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['Books_crawling.spiders']}
2019-05-18 10:53:04 [scrapy.extensions.telnet] INFO: Telnet Password: 39e60e0922a19d52
2019-05-18 10:53:05 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2019-05-18 10:53:09 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2019-05-18 10:53:09 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2019-05-18 10:53:09 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2019-05-18 10:53:09 [scrapy.core.engine] INFO: Spider opened
2019-05-18 10:53:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 10:53:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
Unhandled Error
Traceback (most recent call last):
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\commands\crawl.py", line 58, in run
    self.crawler_process.start()
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\crawler.py", line 293, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\twisted\internet\base.py", line 1272, in run
    self.mainLoop()
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\twisted\internet\base.py", line 1281, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\twisted\internet\base.py", line 902, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
    return self._func(*self._a, **self._kw)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\engine.py", line 122, in _next_request
    if not self._next_request_from_scheduler(spider):
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\engine.py", line 152, in _next_request_from_scheduler
    d = self._download(request, spider)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\engine.py", line 247, in _download
    dwld = self.downloader.fetch(request, spider)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\downloader\__init__.py", line 99, in fetch
    return dfd.addBoth(_deactivate)
builtins.AttributeError: 'generator' object has no attribute 'addBoth'

2019-05-18 10:53:10 [twisted] CRITICAL: Unhandled Error
Traceback (most recent call last):
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\commands\crawl.py", line 58, in run
    self.crawler_process.start()
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\crawler.py", line 293, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\twisted\internet\base.py", line 1272, in run
    self.mainLoop()
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\twisted\internet\base.py", line 1281, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\twisted\internet\base.py", line 902, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\utils\reactor.py", line 41, in __call__
    return self._func(*self._a, **self._kw)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\engine.py", line 122, in _next_request
    if not self._next_request_from_scheduler(spider):
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\engine.py", line 152, in _next_request_from_scheduler
    d = self._download(request, spider)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\engine.py", line 247, in _download
    dwld = self.downloader.fetch(request, spider)
  File "c:\users\ketul\appdata\local\programs\python\python37\lib\site-packages\scrapy\core\downloader\__init__.py", line 99, in fetch
    return dfd.addBoth(_deactivate)
builtins.AttributeError: 'generator' object has no attribute 'addBoth'

2019-05-18 10:54:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 10:55:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 10:56:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 10:57:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 10:58:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 10:59:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 11:00:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 11:01:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-05-18 11:02:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
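
Reading the traceback: fetch() ends with return dfd.addBoth(_deactivate), so it expects a twisted Deferred, but something upstream handed it a plain Python generator instead. A minimal standalone snippet (my own illustration, not project code) showing why that produces exactly this AttributeError:

# deferred_vs_generator.py -- standalone illustration (hypothetical):
# twisted Deferreds have addBoth(), plain generators do not.
from twisted.internet import defer

def returns_deferred():
    return defer.succeed('response')

def returns_generator():
    yield 'response'  # `yield` silently turns this into a generator function

returns_deferred().addBoth(print)   # fine: prints 'response'
returns_generator().addBoth(print)  # AttributeError: 'generator' object
                                    # has no attribute 'addBoth'

So presumably some component in the download chain is returning a generator where a Deferred is expected, for example a method written with yield instead of return.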

0 Answers
