NameError: name 'Rule' is not defined in Python Scrapy

Asked: 2016-01-22 07:47:41

Tags: python web-scraping scrapy

I have the following script for recursively crawling a website:

#!/usr/bin/python 
import scrapy
from scrapy.selector import Selector
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner

class GivenSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/",
#        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
 #       "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]
    rules = (Rule(LinkExtractor(allow=r'/'), callback=parse, follow=True),)

    def parse(self, response):
        select = Selector(response)
        titles = select.xpath('//a[@class="listinglink"]/text()').extract()
        print ' [*] Start crawling at %s ' % response.url
        for title in titles:
            print '\t %s' % title


#configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})
runner = CrawlerRunner()

d = runner.crawl(GivenSpider)
d.addBoth(lambda _: reactor.stop())
reactor.run()

When I run it:

$ python spide.py
NameError: name 'Rule' is not defined

2 Answers:

Answer 0 (score: 0)

If you look at the documentation and search for the word "Rule", you will find:

http://doc.scrapy.org/en/0.20/topics/spiders.html?highlight=rule#crawling-rules

Since you are not importing it from anywhere, it is obvious that Rule is not defined.

 class scrapy.contrib.spiders.Rule(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None)

So, in theory, you should be able to use the Rule class by importing it:

from scrapy.contrib.spiders import Rule

Answer 1 (score: 0)

Loïc Faure-Lacroix is correct. However, in current versions of Scrapy (1.6), you need to import Rule from scrapy.spiders, like this:

from scrapy.spiders import Rule

See the documentation for more information.