scrapy KeyError: 'Spider not found'

Posted: 2017-02-22 08:11:32

Tags: python ubuntu scrapy

I wrote this project on my MacBook, where it works fine. But when I uploaded the project to a Linux server running Ubuntu 16, it failed to run.

This is the error I get:

2017-02-22 15:09:53 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'Lagou.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['Lagou.spiders'], 'BOT_NAME': 'Lagou', 'AUTOTHROTTLE_ENABLED': True, 'DOWNLOAD_DELAY': 3}
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 11, in <module>
    sys.exit(execute())
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spiderloader.py", line 51, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: myspider'
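
The crawl command resolves the name given on the command line against a registry that Scrapy builds by importing every module listed in SPIDER_MODULES; the KeyError means "myspider" never made it into that registry on the server. Running scrapy list from inside the project shows exactly which names did get registered. The same check can be done from a short script; this is just a sketch using Scrapy's SpiderLoader, run from the directory containing scrapy.cfg:

from scrapy.spiderloader import SpiderLoader
from scrapy.utils.project import get_project_settings

# Must be run from inside the project so scrapy.cfg (and with it
# the settings module and SPIDER_MODULES) can be located.
settings = get_project_settings()
loader = SpiderLoader.from_settings(settings)
print(loader.list())  # expected to contain 'myspider'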
  

ubuntu@VM-76-113-ubuntu:~$ tree

.
`-- Lagou
    |-- Lagou
    |   |-- __init__.py
    |   |-- __init__.pyc
    |   |-- items.py
    |   |-- main.py
    |   |-- middlewares.py
    |   |-- pipelines.py
    |   |-- settings.py
    |   |-- settings.pyc
    |   `-- spiders
    |       |-- __init__.py
    |       |-- __init__.pyc
    |       |-- lagou_spider.py
    |       `-- lagou_spider.pyc
    `-- scrapy.cfg

3 directories, 13 files

lagou_spider.py

from scrapy import Spider

class LaGouSpider(Spider):
    name = "myspider"
    # allowed_domains expects bare domains, not URLs
    allowed_domains = ["lagou.com"]
    start_urls = ["https://www.lagou.com/zhaopin/"]
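
Note that scrapy crawl matches its argument against the spider's name attribute ("myspider" here), not against the file name or the class name, and it has to be run from inside the project tree that contains scrapy.cfg so the project settings, and with them SPIDER_MODULES, can be found.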

scrapy.cfg

[settings]
default = LagouSpider.settings

[deploy]
#url = http://localhost:6800/
project = LagouSpider
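
One inconsistency worth noting: this scrapy.cfg points at LagouSpider.settings, while the package in the tree above is named Lagou, and the log shows that the settings which actually loaded declare 'SPIDER_MODULES': ['Lagou.spiders']. If the server copy really referenced LagouSpider, the settings module would not even import, and no spiders would be registered. A cfg consistent with the tree (assuming the package name Lagou shown above) would read:

[settings]
default = Lagou.settings

[deploy]
#url = http://localhost:6800/
project = Lagou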

I resolved this by recreating the whole project and uploading it to the server again. But I still don't know why it failed in the first place: the Python environment and the code were the same, yet it runs on my computer while the server raises this error.
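
One plausible culprit, given that the upload included the .pyc files byte-compiled on the MacBook: a stale .pyc in the spiders package can be picked up instead of the source file next to it, in which case the spider class never gets registered, and recreating the project would have discarded those files as a side effect. A minimal cleanup sketch (assuming the layout shown above; works on the server's Python 2.7):

import os

# Remove byte-compiled files carried over from another machine so
# Python recompiles everything from the .py sources on this server.
for dirpath, dirnames, filenames in os.walk("Lagou"):
    for filename in filenames:
        if filename.endswith(".pyc"):
            os.remove(os.path.join(dirpath, filename))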

0 Answers:

No answers yet.