Background - TL;DR: memory leak in my project
I've spent a few days going carefully through the memory leak docs but can't find the problem. I'm developing a medium-sized Scrapy project, roughly 40k requests per day.
I'm hosting it using Scrapinghub's scheduled runs.
On Scrapinghub, for $9 per month you get 1 VM with 1GB of RAM to run the crawler.
I developed the crawler locally and uploaded it to Scrapinghub; the only problem is that towards the end of the run the memory limit is exceeded.
Running locally with CONCURRENT_REQUESTS=16 works fine, but on Scrapinghub the memory limit is exceeded about 50% of the way through the run. With CONCURRENT_REQUESTS=4 the memory was exceeded at 95%, so reducing it to 2 should solve the problem, but then my crawler becomes too slow.
The alternative solution is to pay for 2 VMs to increase the RAM, but I have the feeling that the way I've set up the crawler is causing a memory leak.
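For context, the relevant local settings look roughly like this (a minimal sketch; the MEMUSAGE_* lines are assumptions added only to mirror the 1GB Scrapinghub container, not settings taken from the project):

# settings.py (sketch)
CONCURRENT_REQUESTS = 16    # fine locally, memory exceeded ~50% through the run on Scrapinghub
# CONCURRENT_REQUESTS = 4   # memory still exceeded, at ~95% of the run
DOWNLOAD_DELAY = 0.05

# Assumed values, only to emulate the 1GB limit when testing locally.
MEMUSAGE_ENABLED = True
MEMUSAGE_LIMIT_MB = 950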
In this example, the project scrapes an online retailer.
When run locally with CONCURRENT_REQUESTS=16, my memusage/max is 2.7GB.
I will now walk through my crawl structure:
import json

class Pipeline(object):
    def process_item(self, item, spider):
        item['stock_jsons'] = json.loads(item['stock_jsons'])['subProducts']
        return item
import scrapy

class mainItem(scrapy.Item):
    date = scrapy.Field()
    url = scrapy.Field()
    active_col_num = scrapy.Field()
    all_col_nums = scrapy.Field()
    old_price = scrapy.Field()
    current_price = scrapy.Field()
    image_urls_full = scrapy.Field()
    stock_jsons = scrapy.Field()

class URLItem(scrapy.Item):
    urls = scrapy.Field()
import requests
import scrapy
from tqdm import tqdm

class ProductSpider(scrapy.Spider):
    name = 'product'

    def __init__(self, **kwargs):
        page = requests.get('www.example.com', headers=headers)
        self.num_pages = # gets the number of pages to search

    def start_requests(self):
        for page in tqdm(range(1, self.num_pages+1)):
            url = f'www.example.com/page={page}'
            yield scrapy.Request(url = url, headers=headers, callback = self.prod_url)

    def prod_url(self, response):
        urls_item = URLItem()
        extracted_urls = response.xpath(####).extract() # Gets URLs to follow
        urls_item['urls'] = [# Get a list of urls]
        for url in urls_item['urls']:
            yield scrapy.Request(url = url, headers=headers, callback = self.parse)

    def parse(self, response): # Parse the main product page
        item = mainItem()
        item['date'] = DATETIME_VAR
        item['url'] = response.url
        item['active_col_num'] = XXX
        item['all_col_nums'] = XXX
        item['old_price'] = XXX
        item['current_price'] = XXX
        item['image_urls_full'] = XXX
        try:
            new_url = 'www.exampleAPI.com/' + item['active_col_num']
        except TypeError:
            new_url = 'www.exampleAPI.com/{dummy_number}'
        yield scrapy.Request(new_url, callback=self.parse_attr, meta={'item': item})

    def parse_attr(self, response):
        ## This calls an API Step 5
        item = response.meta['item']
        item['stock_jsons'] = response.text
        yield item
What have I tried so far?
psutil - it didn't help much.
trackref.print_live_refs() returns the following at the end of the run:
HtmlResponse 31 oldest: 3s ago
mainItem 18 oldest: 5s ago
ProductSpider 1 oldest: 3321s ago
Request 43 oldest: 105s ago
Selector 16 oldest: 3s ago
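For anyone trying to reproduce this, the numbers above came from roughly the following helpers (a sketch, not the project's actual code; where and how often it gets called is up to you):

import os
import psutil
from scrapy.utils.trackref import print_live_refs, get_oldest

def log_memory(spider):
    # Resident memory of the crawler process, in MB (via psutil).
    rss_mb = psutil.Process(os.getpid()).memory_info().rss / 1024 / 1024
    spider.logger.info("RSS: %.1f MB", rss_mb)
    # Live counts of Scrapy's tracked objects (Requests, Responses, Items, ...).
    print_live_refs()
    # Peek at the oldest live response to see what is being kept around.
    oldest = get_oldest('HtmlResponse')
    if oldest is not None:
        spider.logger.info("Oldest live HtmlResponse: %s", oldest.url)

The same information is available interactively from the telnet console, where prefs() is an alias for print_live_refs().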
Question
Please let me know if any more information is required.
Additional information requested
Please let me know if the output from Scrapinghub is needed; I imagine it would be the same, but for completeness, the finish reason message there is memory exceeded.
1. Log lines from the start of the run (from INFO: Scrapy xxx started, to Spider opened).
2020-09-17 11:54:11 [scrapy.utils.log] INFO: Scrapy 2.3.0 started (bot: PLT)
2020-09-17 11:54:11 [scrapy.utils.log] INFO: Versions: lxml 4.5.2.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.7.4 (v3.7.4:e09359112e, Jul 8 2019, 14:54:52) - [Clang 6.0 (clang-600.0.57)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1g 21 Apr 2020), cryptography 3.1, Platform Darwin-18.7.0-x86_64-i386-64bit
2020-09-17 11:54:11 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'PLT',
'CONCURRENT_REQUESTS': 14,
'CONCURRENT_REQUESTS_PER_DOMAIN': 14,
'DOWNLOAD_DELAY': 0.05,
'LOG_LEVEL': 'INFO',
'NEWSPIDER_MODULE': 'PLT.spiders',
'SPIDER_MODULES': ['PLT.spiders']}
2020-09-17 11:54:11 [scrapy.extensions.telnet] INFO: Telnet Password: # blocked
2020-09-17 11:54:11 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
=======
17_Sep_2020_11_54_12
=======
2020-09-17 11:54:12 [scrapy.middleware] INFO: Enabled item pipelines:
['PLT.pipelines.PltPipeline']
2020-09-17 11:54:12 [scrapy.core.engine] INFO: Spider opened
2. Log lines from the end of the run (from INFO: Dumping Scrapy stats to the end).
2020-09-17 11:16:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 15842233,
'downloader/request_count': 42031,
'downloader/request_method_count/GET': 42031,
'downloader/response_bytes': 1108804016,
'downloader/response_count': 42031,
'downloader/response_status_count/200': 41999,
'downloader/response_status_count/403': 9,
'downloader/response_status_count/404': 1,
'downloader/response_status_count/504': 22,
'dupefilter/filtered': 110,
'elapsed_time_seconds': 3325.171148,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2020, 9, 17, 10, 16, 43, 258108),
'httperror/response_ignored_count': 10,
'httperror/response_ignored_status_count/403': 9,
'httperror/response_ignored_status_count/404': 1,
'item_scraped_count': 20769,
'log_count/INFO': 75,
'memusage/max': 2707484672,
'memusage/startup': 100196352,
'request_depth_max': 2,
'response_received_count': 42009,
'retry/count': 22,
'retry/reason_count/504 Gateway Time-out': 22,
'scheduler/dequeued': 42031,
'scheduler/dequeued/memory': 42031,
'scheduler/enqueued': 42031,
'scheduler/enqueued/memory': 42031,
'start_time': datetime.datetime(2020, 9, 17, 9, 21, 18, 86960)}
2020-09-17 11:16:43 [scrapy.core.engine] INFO: Spider closed (finished)
The site I am trying to scrape has around 20k products and shows 48 per page. So the spider goes to the site, sees 20103 products, then divides by 48 (and applies math.ceil) to get the number of pages.
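For reference, that page-count arithmetic (the 20103 figure comes from the run described above; the snippet just shows the calculation):

import math

num_products_seen = 20103   # reported by the site
products_per_page = 48
num_pages = math.ceil(num_products_seen / products_per_page)  # ceil(418.8...) == 419

The stats below are from the Scrapinghub run, which ended with finish_reason memusage_exceeded.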
downloader/request_bytes 2945159
downloader/request_count 16518
downloader/request_method_count/GET 16518
downloader/response_bytes 3366280619
downloader/response_count 16516
downloader/response_status_count/200 16513
downloader/response_status_count/404 3
dupefilter/filtered 7
elapsed_time_seconds 4805.867308
finish_reason memusage_exceeded
finish_time 1600567332341
httperror/response_ignored_count 3
httperror/response_ignored_status_count/404 3
item_scraped_count 8156
log_count/ERROR 1
log_count/INFO 94
memusage/limit_reached 1
memusage/max 1074937856
memusage/startup 109555712
request_depth_max 2
response_received_count 16516
retry/count 2
retry/reason_count/504 Gateway Time-out 2
scheduler/dequeued 16518
scheduler/dequeued/disk 16518
scheduler/enqueued 17280
scheduler/enqueued/disk 17280
start_time 1600562526474
Answer 0: (score: 3)
1. Scheduler queue / active requests
With self.num_pages = 418, these lines of code will create 418 request objects (which also means asking the OS to allocate memory to hold 418 objects) and put them into the scheduler queue:
for page in tqdm(range(1, self.num_pages+1)):
    url = f'www.example.com/page={page}'
    yield scrapy.Request(url = url, headers=headers, callback = self.prod_url)
Each "page" request generates 48 new requests.
Each "product page" request generates 1 "api_call" request.
Each "api_call" request returns an item object.
Since all requests have equal priority, in the worst case the application will need enough memory to hold roughly 20,000 request/response objects in RAM at once (418 pages × 48 products ≈ 20,000 product-page requests, each followed by an API request).
To rule this out, a priority parameter can be added to scrapy.Request.
You will probably need to change the spider configuration to something like this:
def start_requests(self):
    yield scrapy.Request(url = 'www.example.com/page=1', headers=headers, callback = self.prod_url)

def prod_url(self, response):
    # get the number of the next page
    next_page_number = int(response.url.split("/page=")[-1]) + 1
    #...
    for url in urls_item['urls']:
        yield scrapy.Request(url = url, headers=headers, callback = self.parse, priority = 1)
    if next_page_number <= self.num_pages:
        yield scrapy.Request(url = f"www.example.com/page={str(next_page_number)}",
                             headers=headers, callback = self.prod_url)

def parse(self, response): # Parse the main product page
    #....
    try:
        new_url = 'www.exampleAPI.com/' + item['active_col_num']
    except TypeError:
        new_url = 'www.exampleAPI.com/{dummy_number}'
    yield scrapy.Request(new_url, callback=self.parse_attr, meta={'item': item}, priority = 2)
With this spider configuration, the spider only starts processing the product pages of the next page once it has finished processing the products of the previous page, so your application will not build up a long queue of requests/responses.
2. HTTP compression
Many websites compress their HTML to reduce traffic load.
For example, Amazon compresses its product pages with gzip.
The average size of the compressed HTML of an Amazon product page is ~250KB,
while the uncompressed HTML can exceed ~1.5MB.
If your website uses compression and its compressed and uncompressed response sizes are similar to those of Amazon product pages, the application will need a considerable amount of memory to hold both the compressed and the uncompressed response bodies.
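To get a feel for whether the retailer's pages behave like this, one can compare the decoded HTML size with a re-compressed copy outside of Scrapy (a sketch; the URL is a placeholder, and note that requests keeps the Content-Encoding header even though .content is already decoded):

import gzip
import requests

url = 'https://www.example.com/page=1'   # placeholder product-listing URL

resp = requests.get(url)                 # requests advertises and transparently decodes gzip
print("Content-Encoding:", resp.headers.get('Content-Encoding'))
print("Decoded HTML size:  %d KB" % (len(resp.content) // 1024))
print("Re-compressed size: %d KB" % (len(gzip.compress(resp.content)) // 1024))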
The DownloaderStats middleware that populates the downloader/response_bytes stats parameter does not count the size of the uncompressed response, because its process_response method is called before the process_response method of HttpCompressionMiddleware.
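For reference, the relevant entries in Scrapy's default DOWNLOADER_MIDDLEWARES_BASE look roughly like this (values as in recent Scrapy releases; check your installed version):

# Higher numbers sit closer to the downloader, so their process_response
# runs earlier on the way back to the engine.
DOWNLOADER_MIDDLEWARES_BASE = {
    # ...
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    # ...
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
}

That is why DownloaderStats sees the still-compressed body by default.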
To check this, you will need to change the priority of the downloader stats middleware by adding this to your settings:
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 50,
}
In this case:
The downloader/request_bytes stats parameter will decrease, since it will no longer count the size of some headers populated by middlewares.
The downloader/response_bytes stats parameter will greatly increase if the website uses compression.