I am writing a script that constantly (every 30-120 seconds) fetches information from a large number of URLs (Icecast/Shoutcast server status pages), about 500 of them. It works fine, but the resident size of the python process keeps growing. I am sure it grows without bound, because I left it running for several hours and it went from an initial 30Mb to 1.2Gb RES.
I have simplified the script down to the following for easier understanding:
from twisted.internet import reactor
from twisted.web.client import getPage
from twisted.enterprise import adbapi

def ok(res, url):
    print "OK: " + str(url)
    reactor.callLater(30, load, url)

def error(res, url):
    print "FAIL: " + str(url)
    reactor.callLater(30, load, url)

def db_ok(res):
    for item in res:
        if item[1]:
            print "ADDED: " + str(item[1])
            reactor.callLater(30, load, item[1])

def db_error(res):
    print "Database error: " + str(res)
    reactor.stop()

def load(url):
    d = getPage(url,
                headers={"Accept": "text/html"},
                timeout=30)
    d.addCallback(ok, url)
    d.addErrback(error, url)

dbpool = adbapi.ConnectionPool("MySQLdb", "host", "user", "passwd", db="db")
q = dbpool.runQuery("SELECT id, url FROM stations")
q.addCallback(db_ok).addErrback(db_error)

reactor.run()
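To watch the growth without an external tool, the resident size can also be logged from inside the reactor loop. A minimal sketch, assuming Linux (it reads VmRSS from /proc/self/status; the 60-second interval is arbitrary):

    from twisted.internet import task

    def log_rss():
        # Linux-only: current resident set size, as reported by the kernel
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS"):
                    print line.strip()
                    break

    task.LoopingCall(log_rss).start(60)  # wire this in before reactor.run()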
It grows just like the original daemon does, so I have localized the problem. I think it is somehow related to twisted.web.client.getPage(). In the original daemon I used twisted.manhole for heap evaluations at runtime, together with meliae, but did not see anything nasty.
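For reference, dumps like the ones below can be taken with meliae's standard scanner/loader API, roughly like this (a sketch; the filename is arbitrary, and the gc.collect() just drops unreachable cycles before the snapshot):

    import gc
    from meliae import scanner

    gc.collect()                           # collect garbage so only live objects are dumped
    scanner.dump_all_objects("dump.json")

    # Then, usually in a separate interpreter:
    from meliae import loader
    om = loader.load("dump.json")
    om.summarize()                         # prints a per-type table like the ones below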
Here is the first meliae dump, taken right after only 1 or 2 query cycles had completed:
Total 84313 objects, 188 types, Total size = 15.9MiB (16647235 bytes)
Index Count % Size % Cum Max Kind
0 5806 6 4142800 24 24 786712 dict
1 28070 33 2223457 13 38 4874 str
2 612 0 1636992 9 48 3424 HTTPClientFactory
3 19599 23 1585720 9 57 608 tuple
4 643 0 720160 4 61 1120 DelayedCall
5 642 0 713904 4 66 1112 Client
6 617 0 691040 4 70 1120 Connector
7 639 0 577656 3 73 904 type
8 691 0 556576 3 77 1120 Deferred
9 3962 4 475440 2 80 120 function
10 3857 4 462840 2 82 120 code
11 3017 3 308192 1 84 4856 list
12 240 0 266880 1 86 1112 Method
13 2968 3 237440 1 87 80 instancemethod
14 612 0 215424 1 88 352 InsensitiveDict
15 217 0 211128 1 90 12624 module
16 2185 2 157320 0 91 72 builtin_function_or_method
17 107 0 119840 0 91 1120 HTTPPageGetter
18 343 0 117992 0 92 344 IcecastRadioStation
19 343 0 117992 0 93 344 HTTPExtractor
And top at that moment:
VIRT RES SHR S %CPU %MEM TIME+ COMMAND
248m 27m 4152 R 92 1.6 0:09.21 python
Now we wait a while and check again. Here is the picture after 20 minutes of running (about 40 query cycles):
Total 67428 objects, 188 types, Total size = 11.9MiB (12463799 bytes)
Index Count % Size % Cum Max Kind
0 3865 5 3601624 28 28 786712 dict
1 23762 35 2002029 16 44 4874 str
2 16382 24 1346208 10 55 608 tuple
3 644 0 582176 4 60 904 type
4 174 0 554304 4 64 3424 HTTPClientFactory
5 456 0 510720 4 68 1120 DelayedCall
6 3963 5 475560 3 72 120 function
7 3857 5 462840 3 76 120 code
8 240 0 266880 2 78 1112 Method
9 237 0 263544 2 80 1112 Client
10 217 0 211128 1 82 12624 module
11 187 0 209440 1 84 1120 Connector
12 182 0 194624 1 85 1120 Deferred
13 1648 2 179696 1 87 3768 list
14 1530 2 122400 0 88 80 instancemethod
15 343 0 117992 0 89 344 IcecastRadioStation
16 343 0 117992 0 90 344 HTTPExtractor
17 1175 1 103400 0 90 88 weakref
18 1109 1 88720 0 91 80 wrapper_descriptor
19 75 0 83400 0 92 1112 InterfaceClass
And top:
VIRT RES SHR S %CPU %MEM TIME+ COMMAND
532m 240m 4152 S 54 13.7 4:02.64 python
According to meliae, neither the object count nor the total size grows. Yet the process consumed 200Mb of resident memory over those 20 minutes.
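Uncollectable cycles would show up in meliae as live objects anyway, but for completeness they can also be ruled out directly with the stdlib gc module; a minimal sketch of that check (enabled before reactor.run(), inspected later, e.g. via manhole):

    import gc
    gc.set_debug(gc.DEBUG_UNCOLLECTABLE)  # make the collector report objects it cannot free

    # later, e.g. from a manhole session:
    gc.collect()
    print len(gc.garbage)   # on Python 2, cycles involving __del__ end up here
    print gc.garbage[:10]   # peek at a few of them, if any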
I also ran valgrind against python, but found no leaks. Any ideas?
I am using Python version 2.6.6 and Twisted version 10.2.0.
Update #1: I also profiled CPython memory usage with valgrind massif. Here is the allocation tree, covering 99.93% of the allocated memory:
99.93% (578,647,287B) (heap allocation functions) malloc/new/new[], --alloc-fns, etc.
->94.69% (548,309,283B) 0x550819: O_cwrite (cStringIO.c:406)
| ->94.69% (548,309,283B) 0x55096F: O_write (cStringIO.c:436)
| ->94.69% (548,309,283B) 0x5A17F9: PyCFunction_Call (methodobject.c:81)
| ->94.69% (548,309,283B) 0x4D1373: call_function (ceval.c:3750)
| ->94.69% (548,309,283B) 0x4CC2A2: PyEval_EvalFrameEx (ceval.c:2412)
| ->94.69% (548,309,283B) 0x4D1868: fast_function (ceval.c:3836)
| ->94.69% (548,309,283B) 0x4D1549: call_function (ceval.c:3771)
| ->94.69% (548,309,283B) 0x4CC2A2: PyEval_EvalFrameEx (ceval.c:2412)
| ->94.69% (548,309,283B) 0x4D1868: fast_function (ceval.c:3836)
| ->94.69% (548,309,283B) 0x4D1549: call_function (ceval.c:3771)
| ->94.69% (548,309,283B) 0x4CC2A2: PyEval_EvalFrameEx (ceval.c:2412)
| ->94.69% (548,309,283B) 0x4D1868: fast_function (ceval.c:3836)
| ->94.69% (548,309,283B) 0x4D1549: call_function (ceval.c:3771)
| ->94.69% (548,309,283B) 0x4CC2A2: PyEval_EvalFrameEx (ceval.c:2412)
| ->94.69% (548,309,283B) 0x4D1868: fast_function (ceval.c:3836)
| ->94.69% (548,309,283B) 0x4D1549: call_function (ceval.c:3771)
| ->94.69% (548,309,283B) 0x4CC2A2: PyEval_EvalFrameEx (ceval.c:2412)
| ->94.69% (548,309,283B) 0x4CEBB3: PyEval_EvalCodeEx (ceval.c:3000)
| ->94.69% (548,309,283B) 0x5A0DC6: function_call (funcobject.c:524)
| ->94.69% (548,309,283B) 0x4261E8: PyObject_Call (abstract.c:2492)
| ->94.69% (548,309,283B) 0x4D2870: ext_do_call (ceval.c:4063)
| ->94.69% (548,309,283B) 0x4CC4E3: PyEval_EvalFrameEx (ceval.c:2452)
| ->94.69% (548,309,283B) 0x4CEBB3: PyEval_EvalCodeEx (ceval.c:3000)
| ->94.69% (548,309,283B) 0x5A0DC6: function_call (funcobject.c:524)
| ->94.69% (548,309,283B) 0x4261E8: PyObject_Call (abstract.c:2492)
| ->94.69% (548,309,283B) 0x4D2870: ext_do_call (ceval.c:4063)
| ->94.69% (548,309,283B) 0x4CC4E3: PyEval_EvalFrameEx (ceval.c:2452)
| ->94.69% (548,309,283B) 0x4CEBB3: PyEval_EvalCodeEx (ceval.c:3000)
| ->94.69% (548,309,283B) 0x5A0DC6: function_call (funcobject.c:524)
| ->94.69% (548,309,283B) 0x4261E8: PyObject_Call (abstract.c:2492)