Can't get the Scrapy tutorial to work
I'm trying to learn Scrapy, but I can't even get the tutorial to run. I've tried running it on Python 3.7 and 3.5.5 with the same result.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def start_requests(self):
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Save each crawled page's raw HTML to a file named after its page number.
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
This seems to run fine; at least it doesn't throw an error.
When I run "scrapy crawl quotes" in the Anaconda prompt window, I get:
"hed) C:\Users\userOne\python script files\scrapy\tutorial>scrapy crawl
quotes
2019-01-23 18:34:27 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot:
tutorial)
2019-01-23 18:34:27 [scrapy.utils.log] INFO: Versions: lxml 4.2.3.0, libxml2
2.9.5, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python
3.5.5 | packaged by conda-forge | (default, Jul 24 2018, 01:52:17) [MSC
v.1900 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.0.2p 14 Aug 2018),
cryptography 2.3.1, Platform Windows-10-10.0.17134-SP0
Traceback (most recent call last):
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\spiderloader.py", line 69, in load
return self._spiders[spider_name]
KeyError: 'quotes'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\userOne\Anaconda3\envs\hed\Scripts\scrapy-script.py", line
10, in <module>
sys.exit(execute())
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\cmdline.py", line 150, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\cmdline.py", line 90, in _run_print_help
func(*a, **kw)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\cmdline.py", line 157, in _run_command
cmd.run(args, opts)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\commands\crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\crawler.py", line 170, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\crawler.py", line 198, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\crawler.py", line 202, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "C:\Users\userOne\Anaconda3\envs\hed\lib\site- packages\scrapy\spiderloader.py", line 71, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: quotes'
”
The output should look similar to this:
"016-12-16 21:24:05 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-12-16 21:24:05 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/2/> (referer: None)
2016-12-16 21:24:05 [quotes] DEBUG: Saved file quotes-1.html
2016-12-16 21:24:05 [quotes] DEBUG: Saved file quotes-2.html
2016-12-16 21:24:05 [scrapy.core.engine] INFO: Closing spider (finished)"
Thanks in advance for any help you can provide.
Answer 0 (score: 2)
Maybe your source code is in the wrong directory?
I had a very similar, if not identical, problem. (I wasn't using Anaconda, but the error was also "line 69, in load return self._spiders[spider_name] KeyError: 'quotes'".)
What solved it for me was moving the source file (quotes_spider.py) from the projectname/tutorial/tutorial directory into the projectname/tutorial/tutorial/spiders directory.
From the tutorial page: "This is the code for our first Spider. Save it in a file named quotes_spider.py under the tutorial/spiders directory in your project."
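For reference, a freshly generated project should look roughly like this (a sketch based on the Scrapy 1.5 docs; the exact file list may vary slightly by version). The only part that matters for this error is where the spider file sits:

tutorial/                 # top-level directory created by startproject
    scrapy.cfg            # deploy configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/          # quotes_spider.py must live in this directory
            __init__.py
            quotes_spider.py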
Answer 1 (score: 0)
I believe I found the answer. The tutorial skips a step that is only mentioned in the command-line output after you create the project with

    scrapy startproject tutorial

Besides creating your tutorial project, that command prints:
You can start your first spider with:
cd tutorial
scrapy genspider example example.com
To run the tutorial, you need to enter
scrapy genspider quotes quotes.toscrape.com
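If I understand genspider correctly, it simply scaffolds a correctly placed spider file: it should create tutorial/spiders/quotes.py containing a stub roughly like the one below (the exact template can differ between Scrapy versions), which you then overwrite with the tutorial's spider code:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = 'quotes'
    allowed_domains = ['quotes.toscrape.com']
    start_urls = ['http://quotes.toscrape.com/']

    def parse(self, response):
        # Replace this stub with the tutorial's parse logic.
        pass

Creating quotes_spider.py by hand inside tutorial/spiders/ (as in answer 0) achieves the same thing; genspider is just a shortcut.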
Answer 2 (score: 0)
name is required and must be unique for every spider you create.
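As a minimal sketch, the crawl command resolves spiders through this attribute, not through the file name or the class name:

import scrapy

class QuotesSpider(scrapy.Spider):
    # "scrapy crawl quotes" finds this spider via the name attribute,
    # regardless of what the file or class is called.
    name = "quotes"

Without a unique name, Scrapy cannot resolve which spider you meant to run.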
You can check this blog to get started with Scrapy: https://www.inkoop.io/blog/web-scraping-using-python-and-scrapy/