Scrapy feed output contains the expected output multiple times instead of only once

Date: 2016-07-14 06:44:34

Tags: python scrapy

I have written a spider whose sole purpose is to extract a single number from http://www.funda.nl/koop/amsterdam/, namely the maximum number of pages in the pager at the bottom of the page (for example, the number 255 in the example below).

[Image: the pager at the bottom of the page, with 255 as the highest page number]

I managed to do this with a LinkExtractor, based on a regular expression that matches the URLs of these pages. The spider looks like this:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])   # Link to a page containing thumbnails of several houses, such as http://www.funda.nl/koop/amsterdam/p10/

    rules = (
        Rule(le_maxpage, callback='get_max_page_number'),
    )

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        page_numbers = []                                                   # Collect the page numbers of all pager links
        for link in links:
            if link.url.count('/') == 6 and link.url.endswith('/'):         # Select only pages with a link depth of 3
                page_number = int(link.url.split("/")[-2].strip('p'))       # For example, get the number 10 out of the string 'http://www.funda.nl/koop/amsterdam/p10/'
                page_numbers.append(page_number)
        max_page_number = max(page_numbers)                                 # The largest page number found
        print("The maximum page number is %s" % max_page_number)
        yield {'max_page_number': max_page_number}

If I run it with feed output by entering scrapy crawl Funda_max_pages -o funda_max_pages.json on the command line, the resulting JSON file looks like this:

[
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257},
{"max_page_number": 257}
]

What strikes me as odd is that the dict is output 7 times rather than once. After all, the yield statement is not inside the for loop. Can anyone explain this behavior?

2 Answers:

Answer 0 (score: 3):

  1. Your spider goes to the first start_url.
  2. It uses the LinkExtractor to extract 7 URLs.
  3. It downloads each of these 7 URLs and calls get_max_page_number on each of them.
  4. For each URL, get_max_page_number yields one dictionary, hence the 7 identical entries (one way to yield only once is sketched below).
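
Building on this explanation, one way to get the item exactly once is not to follow the pager links at all, but to extract them from the start page itself, which shows the same pager. The following is a minimal sketch under that assumption (the spider name is made up; the extraction logic is copied from the question), using a plain Spider so that parse() runs only on the start URL:

import scrapy
from scrapy.linkextractors import LinkExtractor

class FundaMaxPageOnceSpider(scrapy.Spider):
    # A plain Spider instead of a CrawlSpider: parse() is called once,
    # for the start URL only, so exactly one item is yielded.
    name = "Funda_max_pages_once"                                       # Hypothetical name
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])        # Same pattern as in the question

    def parse(self, response):
        links = self.le_maxpage.extract_links(response)
        page_numbers = [int(link.url.split("/")[-2].strip('p'))
                        for link in links
                        if link.url.count('/') == 6 and link.url.endswith('/')]
        if page_numbers:
            yield {'max_page_number': max(page_numbers)}

Run with the same feed-output command as before, the resulting JSON file should then contain a single entry.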

Answer 1 (score: 0):

As a workaround, I have written the output to a text file instead of using the JSON feed output:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy.crawler import CrawlerProcess

class FundaMaxPagesSpider(CrawlSpider):
    name = "Funda_max_pages"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]

    le_maxpage = LinkExtractor(allow=r'%s+p\d+' % start_urls[0])   # Link to a page containing thumbnails of several houses, such as http://www.funda.nl/koop/amsterdam/p10/

    rules = (
        Rule(le_maxpage, callback='get_max_page_number'),
    )

    def get_max_page_number(self, response):
        links = self.le_maxpage.extract_links(response)
        max_page_number = 0                                                 # Initialize the maximum page number
        for link in links:
            if link.url.count('/') == 6 and link.url.endswith('/'):         # Select only pages with a link depth of 3
                print("The link is %s" % link.url)
                page_number = int(link.url.split("/")[-2].strip('p'))       # For example, get the number 10 out of the string 'http://www.funda.nl/koop/amsterdam/p10/'
                if page_number > max_page_number:
                    max_page_number = page_number                           # Update the maximum page number if the current value is larger than its previous value
        print("The maximum page number is %s" % max_page_number)
        place_name = link.url.split("/")[-3]                                # For example, "amsterdam" in 'http://www.funda.nl/koop/amsterdam/p10/'
        print("The place name is %s" % place_name)
        filename = str(place_name)+"_max_pages.txt"                         # File name with as prefix the place name
        with open(filename, 'w') as f:                                      # Text mode, so that a str can be written
            f.write('max_page_number = %s' % max_page_number)               # Write the maximum page number to a text file
        yield {'max_page_number': max_page_number}

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(FundaMaxPagesSpider)
process.start() # the script will block here until the crawling is finished

I also adapted the spider to run as a standalone script. The script generates a text file amsterdam_max_pages.txt containing the single line max_page_number = 257.
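
If the JSON feed output is still wanted when running the spider as a script, the feed can be configured through the CrawlerProcess settings instead of the command-line -o flag. A sketch assuming a Scrapy 1.x release, where FEED_FORMAT and FEED_URI were the relevant setting names:

from scrapy.crawler import CrawlerProcess

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',                  # Export items as JSON
    'FEED_URI': 'funda_max_pages.json',     # Path of the output file
})

process.crawl(FundaMaxPagesSpider)
process.start()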