I installed Scrapy in PyCharm through an Anaconda environment, and I can import scrapy without any problems. I'm trying to scrape the famous quotes from the well-known quotes.toscrape.com site with the following code (very simple so far):
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = [
        'http://quotes.toscrape.com'
    ]

    def parse(self, response):
        title = response.css('title').extract()
        yield {'titletext': title}
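For context, I launch the crawl from the terminal roughly like this (the spider name here is inferred from the KeyError in the traceback below; it does not match the name = "quotes" defined in the class):

```shell
# hypothetical invocation, reconstructed from the "Spider not found: Scrapecode" error below
scrapy crawl Scrapecode
```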
However, when I run the file with scrapy crawl, I get the following error:
2020-04-06 00:12:34 [scrapy.utils.log] INFO: Scrapy 2.0.1 started (bot: webcrawling)
2020-04-06 00:12:34 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.7.7 (default, Mar 23 2020, 23:19:08) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f 31 Mar 2020), cryptography 2.9, Platform Windows-10-10.0.18362-SP0
2020-04-06 00:12:34 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
Traceback (most recent call last):
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\spiderloader.py", line 68, in load
return self._spiders[spider_name]
KeyError: 'Scrapecode'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\olg\anaconda3\envs\python scripts\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\olg\anaconda3\envs\python scripts\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\olg\Anaconda3\envs\Python Scripts\Scripts\scrapy.exe\__main__.py", line 7, in <module>
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\cmdline.py", line 145, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\cmdline.py", line 99, in _run_print_help
func(*a, **kw)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\cmdline.py", line 153, in _run_command
cmd.run(args, opts)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
crawl_defer = self.crawler_process.crawl(spname, **opts.spargs)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\crawler.py", line 176, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\crawler.py", line 209, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\crawler.py", line 213, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "c:\users\olg\anaconda3\envs\python scripts\lib\site-packages\scrapy\spiderloader.py", line 70, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: Scrapecode'
I suspect I haven't installed Scrapy the proper way; I just installed it through PyCharm, since installing it any other way kept giving me errors. I thought that might be the cause, but I really can't figure these errors out.