Scrapy: recursively crawling to a user-defined page

Asked: 2015-07-20 16:29:42

Tags: python web-scraping scrapy screen-scraping

This is probably easy for experienced users, but I am new to Scrapy, and what I want is a spider that crawls to a user-defined page. I am currently trying to modify the allow pattern in __init__, but it does not seem to work properly. A summary of my code at the moment:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MySpider(CrawlSpider):

    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/alpha"]
    pattern = r"/\d+$"
    rules = [
        Rule(LinkExtractor(allow=[pattern], restrict_xpaths=('//*[@id="imgholder"]/a',)),
             callback='parse_items', follow=True),
    ]

    def __init__(self, argument='', *a, **kw):

        super(MySpider, self).__init__(*a, **kw)

        # some inputs and operations based on those inputs

        i = str(raw_input())    # another input

        # need to change the pattern here
        self.pattern = '/' + i + self.pattern

        # some other operations


    def parse_items(self, response):

        hxs = HtmlXPathSelector(response)
        img = hxs.select('//*[@id="imgholder"]/a')
        item = MyItem()
        item["field1"] = "something"
        item["field2"] = "something else"
        yield item

Now suppose the user inputs i=2, so I want to go only to URLs ending in /2/*some number*, but what actually happens is that the spider crawls any URL matching /*some number*. The update does not seem to propagate. I am using Scrapy version 1.0.1.
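The intended behaviour of the updated pattern can be checked with a quick regex experiment outside Scrapy (a sketch; the URLs are made up to match the example):

```python
import re

# What __init__ builds for user input i = "2":
base_pattern = r"/\d+$"
updated = "/" + "2" + base_pattern    # -> "/2/\d+$"

# Only URLs whose path ends in /2/<number> should match:
print(bool(re.search(updated, "http://www.example.com/alpha/2/15")))  # True
print(bool(re.search(updated, "http://www.example.com/alpha/3/15")))  # False
```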

Is there any way to do this? Thanks in advance.

1 answer:

Answer 0 (score: 1)

By the time your __init__ method is called, the Rule has already been set up with the pattern defined at the top of the class.

However, you can change it dynamically in the __init__ method. To do this, set the Rule again inside the method body and then compile the rules (as shown below):

def __init__(self, argument='', *a, **kw):
    super(MySpider, self).__init__(*a, **kw)
    # set self.pattern here to what you need, then rebuild the rules with it
    MySpider.rules = [
        Rule(LinkExtractor(allow=[self.pattern], restrict_xpaths=('//*[@id="imgholder"]/a',)),
             callback='parse_items', follow=True),
    ]
    # now it is time to compile the new rules:
    super(MySpider, self)._compile_rules()
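The underlying issue can be illustrated without Scrapy at all: the class-level rules list is built once, when the class body executes, so a later change to self.pattern has no effect unless the rules are rebuilt. A minimal stand-in sketch (the Extractor class below is hypothetical, standing in for LinkExtractor):

```python
class Extractor(object):
    """Stand-in for LinkExtractor: remembers the pattern it was built with."""
    def __init__(self, allow):
        self.allow = allow


class Spider(object):
    pattern = r"/\d+$"
    # Built once, at class-definition time, with the class-level pattern:
    rules = [Extractor(allow=pattern)]

    def __init__(self, i):
        self.pattern = "/" + i + Spider.pattern
        # Without rebuilding, the rule still holds the old pattern:
        assert self.rules[0].allow == r"/\d+$"
        # Rebuild the rules so they pick up the updated pattern:
        self.rules = [Extractor(allow=self.pattern)]


spider = Spider("2")
print(spider.rules[0].allow)  # -> /2/\d+$
```

This is exactly what the answer's call to _compile_rules() achieves in Scrapy: after reassigning the rules, the CrawlSpider machinery has to process them again for the new pattern to take effect.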