How do I access command line arguments in a crawl spider in Scrapy?

Asked: 2014-04-29 03:47:06

Tags: python scrapy

I want to pass an argument on the scrapy crawl ... command line and use it in the rules definition of a spider that extends CrawlSpider, like this:

name = 'example.com'
allowed_domains = ['example.com']
start_urls = ['http://www.example.com']

rules = (
    # Extract links matching 'category.php' (but not matching 'subsection.php')
    # and follow links from them (since no callback means follow=True by default).
    Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

    # Extract links matching 'item.php' and parse them with the spider's method parse_item
    Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
)

I want to specify the allow attribute of the SgmlLinkExtractor via a command line argument. I googled and found that I can read the argument's value in the spider's __init__ method, but how do I get a command line argument into the rules definition?

1 Answer:

Answer 0 (score: 5)

You can build the spider's rules attribute in its __init__ method, for example:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MySpider(CrawlSpider):

    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    def __init__(self, allow=None, *args, **kwargs):
        # Build the rules from the -a allow=... argument; this has to happen
        # before calling the parent __init__, which compiles self.rules.
        self.rules = (
            Rule(SgmlLinkExtractor(allow=(allow,))),
        )
        super(MySpider, self).__init__(*args, **kwargs)

And you pass the allow value on the command line like this:

scrapy crawl example.com -a allow="item\.php"
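On newer Scrapy releases SgmlLinkExtractor is deprecated, but the same idea works with LinkExtractor. A minimal sketch, assuming (as an illustrative convention, not part of the original answer) that the allow argument is a comma-separated list of patterns:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class MySpider(CrawlSpider):

    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    def __init__(self, allow=None, *args, **kwargs):
        # Split the hypothetical comma-separated -a allow=... argument into
        # a tuple of regex patterns; an empty tuple means "allow everything".
        patterns = tuple(allow.split(',')) if allow else ()
        self.rules = (
            Rule(LinkExtractor(allow=patterns)),
        )
        super(MySpider, self).__init__(*args, **kwargs)

which could then be run as, for example:

scrapy crawl example.com -a allow="item\.php,category\.php"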