Python Scrapy tutorial KeyError: 'Spider not found:'

Date: 2014-10-14 11:21:00

Tags: python scrapy

I am trying to write my first Scrapy spider. I have been following the tutorial at http://doc.scrapy.org/en/latest/intro/tutorial.html, but I am getting the error "KeyError: 'Spider not found:'".

I believe I am running the command from the correct directory (the one containing the scrapy.cfg file):

(proscraper)#( 10/14/14@ 2:06pm )( tim@localhost ):~/Workspace/Development/hacks/prosum-scraper/scrapy
   tree
.
├── scrapy
│   ├── __init__.py
│   ├── items.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       └── juno_spider.py
└── scrapy.cfg

2 directories, 7 files
(proscraper)#( 10/14/14@ 2:13pm )( tim@localhost ):~/Workspace/Development/hacks/prosum-scraper/scrapy
   ls
scrapy  scrapy.cfg

Here is the error I get:

(proscraper)#( 10/14/14@ 2:13pm )( tim@localhost ):~/Workspace/Development/hacks/prosum-scraper/scrapy
   scrapy crawl juno
/home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/twisted/internet/_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.
  verifyHostname, VerificationError = _selectVerifyImplementation()
Traceback (most recent call last):
  File "/home/tim/.virtualenvs/proscraper/bin/scrapy", line 9, in <module>
    load_entry_point('Scrapy==0.24.4', 'console_scripts', 'scrapy')()
  File "/home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 58, in run
    spider = crawler.spiders.create(spname, **opts.spargs)
  File "/home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/spidermanager.py", line 44, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: juno'

Here is my pip freeze:

(proscraper)#( 10/14/14@ 2:13pm )( tim@localhost ):~/Workspace/Development/hacks/prosum-scraper/scrapy
   pip freeze
Scrapy==0.24.4
Twisted==14.0.2
cffi==0.8.6
cryptography==0.6
cssselect==0.9.1
ipdb==0.8
ipython==2.3.0
lxml==3.4.0
pyOpenSSL==0.14
pycparser==2.10
queuelib==1.2.2
six==1.8.0
w3lib==1.10.0
wsgiref==0.1.2
zope.interface==4.1.1

Here is the code for the spider, with the name attribute filled in:

(proscraper)#( 10/14/14@ 2:14pm )( tim@localhost ):~/Workspace/Development/hacks/prosum-scraper/scrapy
   cat scrapy/spiders/juno_spider.py 
import scrapy

class JunoSpider(scrapy.Spider):
    name = "juno"
    allowed_domains = ["http://www.juno.co.uk/"]
    start_urls = [
        "http://www.juno.co.uk/dj-equipment/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)

1 Answer:

Answer 0: (score: 8)

When you start a project using scrapy as the project name, it creates the directory structure you printed:

.
├── scrapy
│   ├── __init__.py
│   ├── items.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       └── juno_spider.py
└── scrapy.cfg

But using scrapy as the project name has a side effect. If you open the generated scrapy.cfg, you will see that the default settings point to the scrapy.settings module:

[settings]
default = scrapy.settings

And if we look at the scrapy.settings file, we see:

BOT_NAME = 'scrapy'

SPIDER_MODULES = ['scrapy.spiders']
NEWSPIDER_MODULE = 'scrapy.spiders'

Nothing strange here: the bot name, the list of modules where Scrapy will look for spiders, and the module where new spiders created with the genspider command will be placed. So far, so good.
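
Since SPIDER_MODULES drives spider discovery, a quick sanity check is to ask Scrapy which spiders it can actually see. Below is a minimal sketch using the Scrapy 0.24 SpiderManager API that appears in the traceback above (running scrapy list from the project root reports the same information):

# List the spider names Scrapy discovers via SPIDER_MODULES.
# If 'juno' does not show up here, "scrapy crawl juno" raises this KeyError.
from scrapy.utils.project import get_project_settings
from scrapy.spidermanager import SpiderManager

settings = get_project_settings()
manager = SpiderManager.from_settings(settings)
print(manager.list())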

Now let's look at the scrapy library itself. It is correctly installed under the isolated proscraper virtualenv, in the /home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy directory. Keep in mind that site-packages is always added to sys.path, which holds all the paths where Python searches for modules. So, guess what... the scrapy library also has a settings module, /home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/settings, which imports /home/tim/.virtualenvs/proscraper/lib/python2.7/site-packages/scrapy/settings/default_settings.py, where the default values for all settings live. Pay special attention to the default SPIDER_MODULES entry:

SPIDER_MODULES = []

Perhaps you are starting to see what is going on. Choosing scrapy as the project name also generated a scrapy.settings module that collides with the scrapy library's own scrapy.settings. From there, the insertion order of the corresponding paths in sys.path determines which one Python imports: first come, first served. In this case the library's settings win, hence the KeyError: 'Spider not found: juno'.
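
Before renaming anything, you can check the collision directly. A small diagnostic sketch, run from the project root with the virtualenv active, prints where Python actually resolves the scrapy package and its settings module from:

# Which "scrapy" wins the name collision on this machine?
import scrapy
import scrapy.settings

print(scrapy.__file__)           # site-packages library, or the local project package?
print(scrapy.settings.__file__)  # shows whose settings module got imported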

To resolve this conflict, you could rename your project folder to something else, say scrap:

.
├── scrap
│   ├── __init__.py

modify your scrapy.cfg to point to the correct settings module:

[settings]
default = scrap.settings

and update your scrap.settings to point to the right spiders:

SPIDER_MODULES = ['scrap.spiders']
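
For completeness, here is a sketch of what the renamed project's full settings module would look like, assuming only the package was renamed and everything else was left as generated:

# scrap/settings.py
BOT_NAME = 'scrap'

SPIDER_MODULES = ['scrap.spiders']
NEWSPIDER_MODULE = 'scrap.spiders'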

That said, I would follow @paultrmbrth's suggestion and simply recreate the project under a different name.