How do I get stats from multiple scrapy crawls?

Asked: 2017-08-14 08:52:30

Tags: python web-scraping scrapy

Because I am running multiple spiders and relying on CrawlerProcess rather than a single Crawler, I cannot use the approach from the following StackOverflow answer:

How to get stats from a scrapy run?
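
For context, my understanding of that answer is that it reads the stats collector from inside a single spider, roughly along these lines (my paraphrase rather than the answer's exact code; the closed() hook and self.crawler.stats are standard Scrapy APIs):

import scrapy

class QuotesStatsSpider(scrapy.Spider):
    name = "quotes_stats"
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def parse(self, response):
        yield {
            'name': response.css('small.author::text').extract_first()
        }

    def closed(self, reason):
        # self.crawler is bound when the spider is created via from_crawler(),
        # and self.crawler.stats is the StatsCollector for this one crawl.
        self.logger.info("Crawl stats: %s", self.crawler.stats.get_stats())

That works inside one spider, but it does not give my calling script access to the stats of each run.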

I would like to access the stats of both runs from my script using something like get_stats(), but I cannot figure out which object actually exposes get_stats(). Any help is much appreciated.

import scrapy
from scrapy.crawler import CrawlerProcess

class QuotesSpider(scrapy.Spider):
    name = "quotes"

    def parse(self, response):
        yield {
             'name': response.css('small.author::text').extract_first()
        }

class QuotesSpider1(QuotesSpider):
    name = "quotes1"
    start_urls = ['http://quotes.toscrape.com/page/1/']

class QuotesSpider2(QuotesSpider):
    name = "quotes2"
    start_urls = ['http://quotes.toscrape.com/page/2/']

if __name__ == "__main__":
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'FEED_FORMAT': 'jsonlines',
        'FEED_URI': 'result.jl',
    })
    process.crawl(QuotesSpider1)
    process.crawl(QuotesSpider2)
    process.start()
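
For completeness, this is the direction I have been considering, as a sketch only: CrawlerProcess inherits create_crawler() from CrawlerRunner, so it should be possible to build the Crawler objects up front, keep references to them, and read each one's stats collector after process.start() returns. I have not confirmed that the stats remain available at that point:

if __name__ == "__main__":
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'FEED_FORMAT': 'jsonlines',
        'FEED_URI': 'result.jl',
    })
    # create_crawler() returns the Crawler object itself, so a reference to
    # each crawl's StatsCollector is kept even after the run finishes.
    crawler1 = process.create_crawler(QuotesSpider1)
    crawler2 = process.create_crawler(QuotesSpider2)
    process.crawl(crawler1)
    process.crawl(crawler2)
    process.start()

    # Each Crawler exposes its StatsCollector as .stats
    print("quotes1 stats:", crawler1.stats.get_stats())
    print("quotes2 stats:", crawler2.stats.get_stats())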

0 Answers:

No answers yet