Why is my Scrapy spider not found?

Date: 2014-02-21 21:07:47

Tags: python scrapy

It's probably something basic, but I can't find any recent (non-deprecated) examples. Given the following code:

# This is the tutorial project for Scrapy

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()

from scrapy.spider import Spider

class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"

    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

I get this error message:

jacob@Harold ~/Desktop/Scrapy_Projects/tutorial $ scrapy list
jacob@Harold ~/Desktop/Scrapy_Projects/tutorial $ scrapy crawl dmoz
2014-02-21 15:24:37-0400 [scrapy] INFO: Scrapy 0.14.4 started (bot: tutorial)
2014-02-21 15:24:37-0400 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2014-02-21 15:24:37-0400 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-02-21 15:24:37-0400 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-02-21 15:24:37-0400 [scrapy] DEBUG: Enabled item pipelines: 
Traceback (most recent call last):
  File "/usr/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 43, in create
    raise KeyError("Spider not found: %s" % spider_name)
KeyError: 'Spider not found: dmoz'

I think it's probably something very basic. I've tried to find examples I could compare against to figure out what's wrong, but I haven't found anything that seems recent.

Thanks in advance for your help!

2 Answers:

Answer 0 (score: 1)

Did you follow the instructions and run:

scrapy startproject tutorial

This creates the project skeleton; you then need to save your spider code in a file named dmoz_spider.py inside the tutorial/spiders directory.
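
For reference, the layout that startproject generates looks roughly like this (a sketch based on the Scrapy tutorial for the 0.14-era version shown in your traceback; the exact files may vary slightly):

tutorial/
    scrapy.cfg              # deploy/configuration file
    tutorial/               # the project's Python module
        __init__.py
        items.py            # item definitions go here
        pipelines.py
        settings.py
        spiders/            # spider modules go here
            __init__.py
            dmoz_spider.py  # <- save your DmozSpider code in this file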

I just tried it with your script and it worked fine.
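
Note that in your output above, scrapy list printed nothing, which is the telltale sign: no spider was registered at all. Once the file is in the right place, it should print the spider's name:

jacob@Harold ~/Desktop/Scrapy_Projects/tutorial $ scrapy list
dmoz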

Answer 1 (score: 1)

First, the code below belongs in a Python file called items.py (all of these steps assume you have already run the startproject command that hwatkins mentioned):

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()

The rest of the code belongs in the spider file, which should be created in the project's spiders folder.

You also need to import DmozItem into your spider with from tutorial.items import DmozItem if you want to use it there (the first part of the import path is your project name, tutorial in this case).
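
Putting it together, a minimal spider that fills in DmozItem might look like this. This is a sketch following the Scrapy 0.14 tutorial, matching the version in your traceback; note that in 0.14 the base class is BaseSpider rather than Spider, and the XPath expressions are assumptions about the dmoz page structure:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from tutorial.items import DmozItem

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    def parse(self, response):
        # Turn each listing on the page into a DmozItem
        hxs = HtmlXPathSelector(response)
        items = []
        for site in hxs.select('//ul/li'):
            item = DmozItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items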

I suggest you go through the tutorial carefully again.