Spider from command-line arguments

Time: 2013-07-29 10:34:06

Tags: python scrapy

The spider code below produces output files named like k.1375093834.0.txt. What I want are filenames of the form kickstarter.com.1375093834.0.txt.

Any suggested code changes would be very helpful.

import time

from scrapy import log
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class shnurl(CrawlSpider):
    name = "shnurl"
    #start_urls = [
    # "http://www.blogger.com"
    # ]
    rules = [
        Rule(SgmlLinkExtractor(),follow=True, callback="parse")
    ]

    def __init__(self, *args, **kwargs):

        #Initialize the parent class.
        super(shnurl, self).__init__(*args, **kwargs)

        #Get the start URL from the command line.
        self.start_urls = [kwargs.get('start_url')]

        #Create a results file based on the start_url + current time.
        self.fname = '{0}.{1}.{2}'.format(self.start_url[12], time.time(),'txt')
        self.fileout = open(self.fname, 'w+')

        #Create a logfile based on the start_url + current time.
        #Log file stores the errors, debug & info prints.
        logfname = '{0}.{1}.{2}'.format(self.start_url[12], time.time(),'log')
        #log.start(logfile='./runtime.log', loglevel=log.INFO)
        log.start(logfile=logfname, loglevel=log.INFO)
        self.log('Output will be written to: {0}'.format(self.fname), log.INFO)
        #End of constructor

Usage:

scrapy crawl shnurl -a start_url="https://www.kickstarter.com"

1 Answer:

Answer 0 (score: 1)

Assuming I've understood the question, you want to take a slice of start_url, but you have written it incorrectly. As it stands, self.start_url[12] indexes a single character, the 'k' that follows the 12-character prefix "https://www." in your start URL, which is why the filename begins with just "k". Put a colon after the 12 inside the square brackets, as below, and that will fix the problem:

    self.fname = '{0}.{1}.{2}'.format(self.start_url[12:], time.time(),'txt')
    logfname = '{0}.{1}.{2}'.format(self.start_url[12:], time.time(),'log')
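The difference between indexing and slicing can be illustrated outside Scrapy. A minimal standalone sketch, using the start URL from the usage example above (note that the hard-coded 12 only strips the prefix when the URL actually begins with "https://www."):

```python
# The start URL passed on the command line in the question.
start_url = "https://www.kickstarter.com"

# Indexing with [12] returns ONE character: the 'k' right after
# the 12-character prefix "https://www.".
single_char = start_url[12]

# Slicing with [12:] returns everything from index 12 to the end,
# i.e. the URL with the "https://www." prefix stripped off.
domain = start_url[12:]

print(single_char)  # k
print(domain)       # kickstarter.com
```

This is why the original code produced filenames starting with "k." while the sliced version produces the desired "kickstarter.com." prefix.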