Crawl slows down drastically towards the end

Date: 2016-04-20 18:02:48

Tags: python performance scrapy web-crawler throughput

I have a set of 25,000+ URLs that I need to crawl. I consistently see the crawl rate drop off dramatically after about 22,000 URLs.

Take a look at these log lines for some perspective:

2016-04-18 00:14:06 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:15:06 [scrapy] INFO: Crawled 5324 pages (at 5324 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:16:06 [scrapy] INFO: Crawled 9475 pages (at 4151 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:17:06 [scrapy] INFO: Crawled 14416 pages (at 4941 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:18:07 [scrapy] INFO: Crawled 20575 pages (at 6159 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:19:06 [scrapy] INFO: Crawled 22036 pages (at 1461 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:20:06 [scrapy] INFO: Crawled 22106 pages (at 70 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:21:06 [scrapy] INFO: Crawled 22146 pages (at 40 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:22:06 [scrapy] INFO: Crawled 22189 pages (at 43 pages/min), scraped 0 items (at 0 items/min)
2016-04-18 00:23:06 [scrapy] INFO: Crawled 22229 pages (at 40 pages/min), scraped 0 items (at 0 items/min)

Here are my settings:

# -*- coding: utf-8 -*-

BOT_NAME = 'crawler'

SPIDER_MODULES = ['crawler.spiders']
NEWSPIDER_MODULE = 'crawler.spiders'

CONCURRENT_REQUESTS = 10
REACTOR_THREADPOOL_MAXSIZE = 100
LOG_LEVEL = 'INFO'
COOKIES_ENABLED = False
RETRY_ENABLED = False
DOWNLOAD_TIMEOUT = 15
DNSCACHE_ENABLED = True
DNSCACHE_SIZE = 1024000
DNS_TIMEOUT = 10
DOWNLOAD_MAXSIZE = 1024000 # ~1 MB
DOWNLOAD_WARNSIZE = 819200 # ~800 KB
REDIRECT_MAX_TIMES = 3
METAREFRESH_MAXDELAY = 10
ROBOTSTXT_OBEY = True
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36' #Chrome 41

DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'

#DOWNLOAD_DELAY = 1
#AUTOTHROTTLE_ENABLED = True
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 604800 # 7 days
COMPRESSION_ENABLED = True

DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'crawler.middlewares.RandomizeProxies': 740,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware': 830,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}

PROXY_LIST = '/etc/scrapyd/proxy_list.txt'
A few other observations:

  • Memory and CPU consumption are below 10%
  • tcptrack shows no unusual network activity
  • iostat shows negligible disk I/O

What can I look at to debug this?

1 Answer:

Answer 0 (score: 1):

It turned out the problem was one particular domain causing a backlog: the URL queue filled up with requests waiting on responses from that domain, and since only one request per IP/domain was allowed, they were processed one at a time.
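
For reference, the per-domain/per-IP concurrency caps the answer refers to are controlled by Scrapy's CONCURRENT_REQUESTS_PER_DOMAIN and CONCURRENT_REQUESTS_PER_IP settings. A minimal sketch of what produces the one-request-at-a-time behaviour described above (these values are assumptions; they do not appear in the settings posted in the question):

# Assumed settings illustrating the backlog: with a cap of 1, every pending
# request for the slow domain occupies a scheduler slot and completes serially.
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # at most one in-flight request per domain
CONCURRENT_REQUESTS_PER_IP = 1      # when non-zero, this per-IP cap is used instead of the per-domain one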

I started logging on my proxies and tailing their combined output, and the problem became clear as day.
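
For illustration, here is a minimal sketch of the kind of per-request logging that makes such a backlog visible - a downloader middleware that records which proxy is handling which domain (the class name is hypothetical; the actual RandomizeProxies implementation is not shown in the question):

import logging
from urllib.parse import urlparse

logger = logging.getLogger(__name__)

class ProxyLoggingMiddleware:
    """Hypothetical middleware: log the proxy and target domain of every
    outgoing request, so tailing the combined log reveals domains piling up."""

    def process_request(self, request, spider):
        domain = urlparse(request.url).netloc
        proxy = request.meta.get('proxy', 'no-proxy')
        logger.info('proxy=%s domain=%s url=%s', proxy, domain, request.url)
        return None  # returning None lets the request continue through the middleware chain

Tailing output like this makes it obvious when most in-flight requests belong to a single slow domain.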

I wouldn't have figured this out without the comment discussion above - thanks @DuckPuncher and @smci