Scrapy - Spider not found

Posted: 2021-02-08 20:23:53

Tags: python scrapy

I have been trying to run the bot, but it keeps failing with a "spider not found" error. I checked the directory and the spider is definitely there. The error is below. I also tried changing the spider name, but that didn't help. Any help would be appreciated. Thanks.

(C:\Users\prabh\Anaconda3\envs\py3_knime\py3_knime) C:\Users\prabh\Downloads\storage-mart\storage-mart>scrapy crawl storagemart
ROSSHAVEN
*************************
2021-02-08 15:12:58 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: public_app)
2021-02-08 15:12:58 [scrapy.utils.log] INFO: Versions: lxml 4.1.1.0, libxml2 2.9.4, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.6.12 |Anaconda, Inc.| (default, Sep  9 2020, 00:29:25) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1i  8 Dec 2020), cryptography 3.3.1, Platform Windows-10-10.0.18362-SP0
2021-02-08 15:12:58 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
Traceback (most recent call last):
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\spiderloader.py", line 68, in load
    return self._spiders[spider_name]
KeyError: 'storagemart'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\prabh\Anaconda3\envs\py3_knime\py3_knime\Scripts\scrapy.exe\__main__.py", line 7, in <module>
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\cmdline.py", line 98, in _run_print_help
    func(*a, **kw)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\cmdline.py", line 151, in _run_command
    cmd.run(args, opts)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\commands\crawl.py", line 42, in run
    crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\crawler.py", line 191, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\crawler.py", line 224, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\crawler.py", line 228, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "c:\users\prabh\anaconda3\envs\py3_knime\py3_knime\lib\site-packages\scrapy\spiderloader.py", line 70, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: storagemart'

1 Answer:

Answer 0 (score: 1)

Set the spider's name attribute:

import scrapy

class MySpider(scrapy.Spider):
    name = "storagemart"

Then you would run it with:

scrapy crawl storagemart
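
For context, Scrapy resolves `scrapy crawl <name>` against the name attribute of each scrapy.Spider subclass it finds in the modules listed in the project's SPIDER_MODULES setting; the class name and the file name don't matter. Below is a minimal, self-contained sketch of what the spider file could look like (the file path and the start URL are placeholders, not taken from the question):

# spiders/storagemart.py (hypothetical path inside the project's spiders package)
import scrapy

class StorageMartSpider(scrapy.Spider):
    # `scrapy crawl storagemart` looks up the spider by this attribute,
    # not by the class name or the file name.
    name = "storagemart"
    start_urls = ["https://www.example.com/"]  # placeholder URL

    def parse(self, response):
        # Placeholder callback: yield the page title just to prove
        # the spider runs end to end.
        yield {"title": response.css("title::text").get()}

If the error persists, run `scrapy list` from the directory that contains scrapy.cfg; it prints every spider Scrapy can discover. A spider missing from that list usually means the file sits outside the package named in SPIDER_MODULES in settings.py, or the file fails to import.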