How to pass a user-defined argument to a Scrapy spider when running it from a script

Time: 2016-07-14 10:01:28

Tags: python scrapy

Similar to How to pass a user defined argument in scrapy spider, I am trying to run a spider whose argument (start_url) is user-defined. However, instead of running Scrapy from the command line, I want to run it from a script.

The code I have so far is:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess

class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])   # Link to a page containing thumbnails of several houses, such as http://www.funda.nl/koop/amsterdam/p10/

    rules = (
        Rule(le_maxpage, callback='get_max_page_number'),
    )

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        max_page_number = 0                                                 # Initialize the maximum page number
        for link in links:
            if link.url.count('/') == 6 and link.url.endswith('/'):         # Select only pages with a link depth of 3
                print("The link is %s" % link.url)
                page_number = int(link.url.split("/")[-2].strip('p'))       # For example, get the number 10 out of the string 'http://www.funda.nl/koop/amsterdam/p10/'
                if page_number > max_page_number:
                    max_page_number = page_number                           # Update the maximum page number if the current value is larger than its previous value
        print("The maximum page number is %s" % max_page_number)
        place_name = link.url.split("/")[-3]                                # For example, "amsterdam" in 'http://www.funda.nl/koop/amsterdam/p10/'
        print("The place name is %s" % place_name)
        filename = str(place_name)+"_max_pages.txt"                         # File name with as prefix the place name
        with open(filename, 'w') as f:                                      # Open in text mode; 'wb' would reject a str under Python 3
            f.write('max_page_number = %s' % max_page_number)               # Write the maximum page number to a text file
        yield {'max_page_number': max_page_number}

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(FundaMaxPagesSpider)
process.start() # the script will block here until the crawling is finished

In this example, start_url is fixed to http://www.funda.nl/koop/amsterdam/, but I would like to pass it in as a variable. How can I do that?

1 Answer:

Answer 0 (score: 2)

scrapy crawl Funda_max_pages -a url='http://stackoverflow.com/'

is equivalent to:

process.crawl(FundaMaxPagesSpider, url='http://stackoverflow.com/')
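This works because CrawlerProcess.crawl() forwards any extra positional and keyword arguments to the spider's constructor, just as -a does on the command line.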

Now you just need to handle the argument in the spider, as described in the answer you linked to:

def __init__(self, url='http://www.funda.nl/koop/amsterdam/', *args, **kwargs):
    super(FundaMaxPagesSpider, self).__init__(*args, **kwargs)  # CrawlSpider.__init__ compiles the rules, so it must still run
    self.start_urls = [url]
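
One caveat specific to your spider: le_maxpage and rules are class attributes built from start_urls[0] at class-definition time, so setting self.start_urls in __init__ will not update the link-extractor pattern. Below is a minimal sketch of one way around that, moving the URL-dependent pieces into __init__; it assumes CrawlSpider compiles self.rules inside its own __init__, so the rules are assigned before the super() call. The rotterdam URL at the bottom is just a hypothetical example of a user-supplied value.

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess

class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]

    def __init__(self, url='http://www.funda.nl/koop/amsterdam/', *args, **kwargs):
        self.start_urls = [url]
        # Rebuild the link extractor from the user-supplied URL
        self.le_maxpage = LinkExtractor(allow=r'%s+p\d+' % url)
        # The rules must exist before CrawlSpider.__init__ compiles them
        self.rules = (Rule(self.le_maxpage, callback='get_max_page_number'),)
        super(FundaMaxPagesSpider, self).__init__(*args, **kwargs)

    # get_max_page_number unchanged from the question

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(FundaMaxPagesSpider, url='http://www.funda.nl/koop/rotterdam/')  # hypothetical user-supplied URL
process.start()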