I want to know the maximum amount of RAM allocated during a function call (in Python). There are other questions on SO about tracking RAM usage:
Which Python memory profiler is recommended?
How do I profile memory usage in Python?
but these seem to let you track memory usage only at the moment the heap() method is called (in the case of guppy). However, what I want to track is a function in an external library which I can't modify, and which grows to use a large amount of RAM but then frees it once the function has finished executing. Is there any way to find out the total amount of RAM used during the function call?
Answer 0 (score: 25)
This question looked interesting, and it gave me a reason to look into Guppy/Heapy, so thank you for that.
I tried for about 2 hours to get Heapy to monitor a function call/process without modifying its source code, with zero luck.
I did find a way to accomplish your task using the built-in Python library resource. Note that the documentation does not indicate what the RU_MAXRSS value returns. Another SO user noted that it was in kB. Running the test code below under Mac OSX 7.3 and watching my system resources during the test, I believe the returned values are in Bytes, not kBytes.
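A quick way to sanity-check the units on your own platform is to allocate a block of known size and see how much ru_maxrss grows; a minimal sketch (the 100 MB figure is arbitrary):

import resource

# Allocate ~100 MB and compare the reported high-water mark before and after.
# On Linux ru_maxrss is reported in kilobytes, on macOS in bytes, so the raw
# delta printed here is platform dependent.
before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
blob = b"x" * (100 * 1024 * 1024)
after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("ru_maxrss grew by %d (raw, platform-dependent units)" % (after - before))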
The 10,000-foot view of how I used the resource library to monitor the library call is to launch the function in a separate (monitorable) thread and track the system resources for that process in the main thread. Below are the two files you'll need to run to test it out.
Library resource monitor - whatever_you_want.py
import resource
import time

from stoppable_thread import StoppableThread


class MyLibrarySniffingClass(StoppableThread):
    def __init__(self, target_lib_call, arg1, arg2):
        super(MyLibrarySniffingClass, self).__init__()
        self.target_function = target_lib_call
        self.arg1 = arg1
        self.arg2 = arg2
        self.results = None

    def startup(self):
        # Overload the startup function
        print("Calling the Target Library Function...")

    def cleanup(self):
        # Overload the cleanup function
        print("Library Call Complete")

    def mainloop(self):
        # Start the library call
        self.results = self.target_function(self.arg1, self.arg2)

        # Kill the thread when complete
        self.stop()


def SomeLongRunningLibraryCall(arg1, arg2):
    max_dict_entries = 2500
    delay_per_entry = .005

    some_large_dictionary = {}
    dict_entry_count = 0

    while True:
        time.sleep(delay_per_entry)
        dict_entry_count += 1
        # Store a real list so that memory consumption actually grows
        some_large_dictionary[dict_entry_count] = list(range(10000))

        if len(some_large_dictionary) > max_dict_entries:
            break

    print(arg1 + " " + arg2)
    return "Good Bye World"


if __name__ == "__main__":
    # Lib Testing Code
    mythread = MyLibrarySniffingClass(SomeLongRunningLibraryCall, "Hello", "World")
    mythread.start()

    start_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    delta_mem = 0
    max_memory = 0
    memory_usage_refresh = .005  # Seconds

    while True:
        time.sleep(memory_usage_refresh)
        delta_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss - start_mem
        if delta_mem > max_memory:
            max_memory = delta_mem

        # Uncomment this line to see the memory usage during run-time
        # print("Memory Usage During Call: %d MB" % (delta_mem / 1000000.0))

        # Check to see if the library call is complete
        if mythread.isShutdown():
            print(mythread.results)
            break

    print("\nMAX Memory Usage in MB: " + str(round(max_memory / 1000000.0, 3)))
Stoppable thread - stoppable_thread.py
import threading
import time


class StoppableThread(threading.Thread):
    def __init__(self):
        super(StoppableThread, self).__init__()
        self.daemon = True
        self.__monitor = threading.Event()
        self.__monitor.set()
        self.__has_shutdown = False

    def run(self):
        '''Overloads the threading.Thread.run'''
        # Call the User's Startup functions
        self.startup()

        # Loop until the thread is stopped
        while self.isRunning():
            self.mainloop()

        # Clean up
        self.cleanup()

        # Flag to the outside world that the thread has exited
        # AND that the cleanup is complete
        self.__has_shutdown = True

    def stop(self):
        self.__monitor.clear()

    def isRunning(self):
        return self.__monitor.isSet()

    def isShutdown(self):
        return self.__has_shutdown

    ###############################
    ### User Defined Functions ####
    ###############################

    def mainloop(self):
        '''
        Expected to be overwritten in a subclass!!
        Note that Stoppable while(1) is handled in the built in "run".
        '''
        pass

    def startup(self):
        '''Expected to be overwritten in a subclass!!'''
        pass

    def cleanup(self):
        '''Expected to be overwritten in a subclass!!'''
        pass
Answer 1 (score: 18)
You can do this with memory_profiler. The function memory_usage returns a list of values, which represent the memory usage over time (by default over chunks of .1 seconds). If you need the maximum, just take the max of that list. A small example:
from memory_profiler import memory_usage
from time import sleep

def f():
    # a function with growing memory consumption
    a = [0] * 1000
    sleep(.1)
    b = a * 100
    sleep(.1)
    c = b * 100
    return a
mem_usage = memory_usage(f)
print('Memory usage (in chunks of .1 seconds): %s' % mem_usage)
print('Maximum memory usage: %s' % max(mem_usage))
In my case (memory_profiler 0.25), it prints the following output:
Memory usage (in chunks of .1 seconds): [45.65625, 45.734375, 46.41015625, 53.734375]
Maximum memory usage: 53.734375
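If the function you want to measure takes arguments, memory_usage also accepts a (func, args, kwargs) tuple, and recent versions provide a max_usage flag so you do not have to take the max yourself. A small sketch (the function g and its parameters are made up for illustration; depending on the memory_profiler version, max_usage=True returns either a float or a one-element list, both in MiB):

from memory_profiler import memory_usage

def g(n, copies=100):
    # Build a list of lists so that memory usage actually grows with n.
    return [list(range(n)) for _ in range(copies)]

# Pass the callable together with its positional and keyword arguments.
peak = memory_usage((g, (10000,), {'copies': 200}), max_usage=True)
print('Peak memory usage (MiB): %s' % peak)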
Answer 2 (score: 5)
This seems to work under Windows. Don't know about other operating systems.
In [50]: import os
In [51]: import psutil
In [52]: process = psutil.Process(os.getpid())
In [53]: process.get_ext_memory_info().peak_wset
Out[53]: 41934848
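Note that get_ext_memory_info() has been renamed in newer psutil releases; assuming psutil 4.0 or later, the same Windows counter should be reachable through memory_info(), roughly like this:

import os
import psutil

process = psutil.Process(os.getpid())
# On Windows, memory_info() exposes the peak working set size directly;
# the available fields differ on Linux/macOS (there is no peak_wset there).
print(process.memory_info().peak_wset)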
Answer 3 (score: 2)
You can use the Python library resource to get memory usage.
import resource
resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
It will give the memory usage in kilobytes; to convert to MB, divide by 1000.
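A minimal sketch of wrapping a single call this way, assuming the kilobyte units mentioned above (keep in mind that ru_maxrss is a process-wide high-water mark, so the delta only reflects the call if it pushes the process to a new peak):

import resource

def peak_rss_delta_mb(func, *args, **kwargs):
    # ru_maxrss is a process-wide high-water mark; the delta is only meaningful
    # if the wrapped call drives the process to a new peak.
    before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    result = func(*args, **kwargs)
    after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("Peak RSS grew by ~%.1f MB during the call" % ((after - before) / 1000.0))
    return result

# Example: build a large list of bytearrays and see how much the peak grew.
peak_rss_delta_mb(lambda n: [bytearray(1000) for _ in range(n)], 200000)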
Answer 4 (score: 1)
An improvement on @Vader B's answer (as it didn't work for me out of the box):
1byte = 1glyph
Answer 5 (score: 1)
On a Linux system, read /proc/meminfo, the source of the information reported by free:
~ head /proc/meminfo
MemTotal: 4039168 kB
MemFree: 2567392 kB
MemAvailable: 3169436 kB
Buffers: 81756 kB
Cached: 712808 kB
SwapCached: 0 kB
Active: 835276 kB
Inactive: 457436 kB
Active(anon): 499080 kB
Inactive(anon): 17968 kB
I have created a decorator class to measure the memory consumption of a function.
class memoryit:
    def FreeMemory():
        with open('/proc/meminfo') as file:
            for line in file:
                if 'MemFree' in line:
                    free_memKB = line.split()[1]
                    return (float(free_memKB) / (1024 * 1024))  # returns GBytes float

    def __init__(self, function):   # Decorator class to print the memory consumption of a
        self.function = function    # function/method after calling it a number of iterations

    def __call__(self, *args, iterations=1, **kwargs):
        before = memoryit.FreeMemory()
        for i in range(iterations):
            result = self.function(*args, **kwargs)
        after = memoryit.FreeMemory()
        print('%r memory used: %2.3f GB' % (self.function.__name__, (before - after) / iterations))
        return result
Function whose consumption is to be measured:
@memoryit
def MakeMatrix(dim):
    matrix = []
    for i in range(dim):
        matrix.append([j for j in range(dim)])
    return (matrix)
Usage:
print ("Starting memory:", memoryit.FreeMemory())
m = MakeMatrix(10000)
print ("Ending memory:", memoryit.FreeMemory() )
Printout:
Starting memory: 10.58599853515625
'MakeMatrix' memory used: 3.741 GB
Ending memory: 6.864116668701172
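One possible refinement, since MemFree ignores memory the kernel could reclaim from its caches: read MemAvailable (visible in the /proc/meminfo dump above) instead. A sketch of such a helper that could stand in for FreeMemory() inside the decorator:

def AvailableMemory():
    # Like FreeMemory(), but based on MemAvailable, which also counts
    # reclaimable page cache and is usually a better estimate of usable RAM.
    with open('/proc/meminfo') as file:
        for line in file:
            if line.startswith('MemAvailable'):
                return float(line.split()[1]) / (1024 * 1024)  # GBytes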
Answer 6 (score: 0)
The standard Unix utility time tracks the maximum memory usage of a process, along with other useful statistics for your program. Example output (maxresident is the maximum memory usage, in kilobytes):
> time python ./scalabilty_test.py
45.31user 1.86system 0:47.23elapsed 99%CPU (0avgtext+0avgdata 369824maxresident)k
0inputs+100208outputs (0major+99494minor)pagefaults 0swaps
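If you would rather stay inside Python, a roughly equivalent measurement for a child process can be taken with resource.RUSAGE_CHILDREN once the child has exited (a Unix-only sketch; the script name is just the example above):

import resource
import subprocess
import sys

# Run the target script as a child process, then read the peak RSS of
# terminated, waited-for children.  Units follow the platform convention
# (kilobytes on Linux, bytes on macOS).
subprocess.run([sys.executable, "scalabilty_test.py"], check=True)
peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print("Child peak RSS: %d" % peak)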
Answer 7 (score: -2)
I have also been struggling with this task. After experimenting with psutil and the methods from Adam's answer, I wrote a function (credit to Adam Lewis) to measure the memory used by a specific function. People may find it easier to grab and use.
I found the material about threading and overriding the superclass really helpful for understanding what Adam is doing in his scripts. Sorry I cannot post the links due to my "2 links" maximum limitation.