Scrapy performance improvement and memory consumption

Posted: 2016-08-26 19:12:00

Tags: python scrapy scrapy-spider

Server

  • 6 GB RAM
  • 4-core Intel Xeon 2.60GHz
  • 32 CONCURRENT_REQUESTS
  • 1m URLs in a CSV
  • 700 Mbit/s downstream
  • 96% memory consumption

With debug mode enabled, the crawl stops after roughly 400,000 URLs, most likely because the server runs out of memory. Without debug mode it takes up to 5 days, which is very slow imo, and it still uses a huge amount of memory (96%).

Any tips are very welcome :)

import scrapy
import csv

def get_urls_from_csv():
    with open('data.csv', newline='') as csv_file:
        data = csv.reader(csv_file, delimiter=',')
        scrapurls = []
        for row in data:
            scrapurls.append("http://"+row[2])
        return scrapurls

class rssitem(scrapy.Item):
    sourceurl = scrapy.Field()
    rssurl = scrapy.Field()


class RssparserSpider(scrapy.Spider):
    name = "rssspider"
    allowed_domains = ["*"]
    start_urls = ()

    def start_requests(self):
        return [scrapy.http.Request(url=start_url) for start_url in get_urls_from_csv()]

    def parse(self, response):
        res = response.xpath('//link[@type="application/rss+xml"]/@href')
        for sel in res:
            item = rssitem()
            item['sourceurl']=response.url
            item['rssurl']=sel.extract()
            yield item

        pass

2 answers:

Answer 0 (score: 1)

As I said in the comments, you should use generators to avoid building a list of objects in memory (see what-does-the-yield-keyword-do-in-python). With a generator the objects are created lazily, so you never hold a large list of URLs in memory all at once:

def get_urls_from_csv():
    with open('data.csv', newline='') as csv_file:
        data = csv.reader(csv_file, delimiter=',')
        for row in data:
            yield "http://"+row[2]) # yield each url lazily


class rssitem(scrapy.Item):
    sourceurl = scrapy.Field()
    rssurl = scrapy.Field()


class RssparserSpider(scrapy.Spider):
    name = "rssspider"
    allowed_domains = ["*"]
    start_urls = ()

    def start_requests(self):
        # return a generator expression.
        return (scrapy.http.Request(url=start_url) for start_url in get_urls_from_csv())

    def parse(self, response):
        res = response.xpath('//link[@type="application/rss+xml"]/@href')
        for sel in res:
            item = rssitem()
            item['sourceurl']=response.url
            item['rssurl']=sel.extract()
            yield item

As for performance, the documentation on Broad Crawls suggests trying to increase concurrency:

Concurrency is the number of requests that are processed in parallel. There is a global limit and a per-domain limit. The default global concurrency limit in Scrapy is not suitable for crawling many different domains in parallel, so you will want to increase it. How much to increase it will depend on how much CPU your crawler will have available. A good starting point is 100, but the best way to find out is by doing some trials and identifying at what concurrency your Scrapy process gets CPU bounded. For optimum performance, you should pick a concurrency where CPU usage is at 80-90%.

To increase the global concurrency, use:

CONCURRENT_REQUESTS = 100

Emphasis mine.

Also, Increase Twisted IO thread pool maximum size:

Currently Scrapy does DNS resolution in a blocking way with usage of a thread pool. With higher concurrency levels the crawling could be slow or even fail, hitting DNS resolver timeouts. A possible solution is to increase the number of threads handling DNS queries. The DNS queue will be processed faster, speeding up the establishing of connections and the crawl overall.

To increase the maximum thread pool size, use:

REACTOR_THREADPOOL_MAXSIZE = 20
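
Putting both recommendations together, a minimal sketch of the relevant settings.py entries could look like the following (the values are just the starting points quoted above; tune them against your own CPU usage):

# settings.py -- starting points taken from the recommendations above
CONCURRENT_REQUESTS = 100        # global concurrency; raise it until the process is CPU bound
REACTOR_THREADPOOL_MAXSIZE = 20  # more threads for Scrapy's blocking DNS resolution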

Answer 1 (score: 0)

import csv
from collections import namedtuple

import scrapy


def get_urls_from_csv():
    with open('data.csv', newline='') as csv_file:
        data = csv.reader(csv_file, delimiter=',')
        for row in data:
            yield row[2]


# if you can use something other than scrapy
rssitem = namedtuple('rssitem', 'sourceurl rssurl')


class RssparserSpider(scrapy.Spider):
    name = "rssspider"
    allowed_domains = ["*"]
    start_urls = ()

    def start_requests(self):  # remember that this returns a generator
        for start_url in get_urls_from_csv():
            yield scrapy.http.Request(url="http://{}".format(start_url))

    def parse(self, response):
        res = response.xpath('//link[@type="application/rss+xml"]/@href')
        for sel in res:
            yield rssitem(response.url, sel.extract())
        pass