How can I eat memory using Python?

Date: 2011-06-11 18:43:27

Tags: python memory memory-management

Just for experimentation, and fun... I'm trying to create an application that deliberately consumes RAM, in an amount we specify up front. For example, if I ask for 512 MB, the application should consume 512 MB right away.

Searching the web, most suggestions use a while loop to fill RAM with variables or data. But I think that is a slow way to fill RAM, and probably inaccurate too.

I looked for a memory-management library in Python and came across mmap (http://docs.python.org/library/mmap.html), but I can't figure out how to use it to eat up RAM space in one go.

I have seen mem-eater applications before, but have no idea how they are written...

So, are there better suggestions for filling RAM with random data right away? Or should I just use a while loop to fill data manually, but with multithreading to speed it up?
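For reference, the mmap module mentioned above can create an anonymous in-memory mapping of a fixed size; the sketch below (not from the original question) writes through the mapping so the OS actually commits every page:

```python
import mmap

SIZE = 512 * 1024 * 1024  # 512 MiB

# fileno -1 requests an anonymous mapping, not backed by any file
buf = mmap.mmap(-1, SIZE)

# write through the mapping so every page is actually committed
chunk = b"\x00" * (1024 * 1024)
for _ in range(SIZE // len(chunk)):
    buf.write(chunk)

buf.close()  # releases the 512 MiB again
```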

6 Answers:

Answer 0 (score: 34)

One simple way might be:

some_str = ' ' * 512000000

It seems to work just fine in my tests.

Edit: in Python 3 you may want to use bytearray(512000000) instead.
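Building on that, a tiny helper (the name eat_ram is my own, not from the answer) makes the requested size a parameter:

```python
def eat_ram(mb):
    """Return a buffer of roughly `mb` binary megabytes.

    The RAM stays in use for as long as the returned object is alive.
    """
    return bytearray(mb * 1024 * 1024)

buf = eat_ram(512)  # holds ~512 MiB
del buf             # frees it again
```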

Answer 1 (score: 6)

You won't be able to allocate all the memory you want using a construct like

s = ' ' * BIG_NUMBER

You'd do better appending to a list:
a = []
while True:
    print len(a)
    a.append(' ' * 10**6)

Here is a longer program that gives more insight into memory-allocation limits:

import os
import psutil

PROCESS = psutil.Process(os.getpid())
MEGA = 10 ** 6
MEGA_STR = ' ' * MEGA

def pmem():
    tot, avail, percent, used, free = psutil.virtual_memory()
    tot, avail, used, free = tot / MEGA, avail / MEGA, used / MEGA, free / MEGA
    proc = PROCESS.get_memory_info()[1] / MEGA
    print('process = %s total = %s avail = %s used = %s free = %s percent = %s'
          % (proc, tot, avail, used, free, percent))

def alloc_max_array():
    i = 0
    ar = []
    while True:
        try:
            #ar.append(MEGA_STR)  # no copy if reusing the same string!
            ar.append(MEGA_STR + str(i))
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    print 'maximum array allocation:', max_i
    pmem()

def alloc_max_str():
    i = 0
    while True:
        try:
            a = ' ' * (i * 10 * MEGA)
            del a
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    _ = ' ' * (max_i * 10 * MEGA)
    print 'maximum string allocation', max_i
    pmem()

pmem()
alloc_max_str()
alloc_max_array()

Here is the output I got:

process = 4 total = 3179 avail = 2051 used = 1127 free = 2051 percent = 35.5
maximum string allocation 102
process = 1025 total = 3179 avail = 1028 used = 2150 free = 1028 percent = 67.7
maximum array allocation: 2004
process = 2018 total = 3179 avail = 34 used = 3144 free = 34 percent = 98.9
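To cross-check such numbers without psutil, the standard-library resource module (Unix only) reports the peak resident set size; note that ru_maxrss is in kilobytes on Linux but bytes on macOS:

```python
import resource

def peak_rss():
    # ru_maxrss: peak resident set size (KB on Linux, bytes on macOS)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
blob = bytearray(50 * 1024 * 1024)  # ~50 MiB, zero-filled on allocation
after = peak_rss()
print('peak RSS grew from %s to %s' % (before, after))
```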

Answer 2 (score: 1)

Here is a version of markolopa's answer that works for me:

import os
import psutil

PROCESS = psutil.Process(os.getpid())
MEGA = 10 ** 6
MEGA_STR = ' ' * MEGA


def pmem():
    try:
        tot, avail, percent, used, free, active, inactive, buffers = psutil.virtual_memory()
    except ValueError:
        tot, avail, percent, used, free, active, inactive, buffers, cached, shared = psutil.virtual_memory()
    tot, avail, used, free = tot / MEGA, avail / MEGA, used / MEGA, free / MEGA
    proc = PROCESS.memory_info()[1] / MEGA
    print('process = %s total = %s avail = %s used = %s free = %s percent = %s'
          % (proc, tot, avail, used, free, percent))


def alloc_max_array():
    i = 0
    ar = []
    while True:
        try:
            #ar.append(MEGA_STR)  # no copy if reusing the same string!
            ar.append(MEGA_STR + str(i))
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    print('maximum array allocation:', max_i)
    pmem()


def alloc_max_str():
    i = 0
    while True:
        try:
            a = ' ' * (i * 10 * MEGA)
            del a
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    _ = ' ' * (max_i * 10 * MEGA)
    print('maximum string allocation', max_i)
    pmem()

pmem()
alloc_max_str()
alloc_max_array()
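The positional unpacking above is what breaks between psutil versions (hence the try/except). Using the named attributes of the returned namedtuple sidesteps the problem entirely, assuming a reasonably recent psutil:

```python
import psutil

vm = psutil.virtual_memory()
# named attributes stay valid even when psutil adds new fields
print('total=%s avail=%s used=%s free=%s percent=%s'
      % (vm.total, vm.available, vm.used, vm.free, vm.percent))
```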

Answer 3 (score: 1)

You can allocate a huge amount of memory by doing something like this:

while True:
    for i in range(0, 100000000):
        Gig = 1024 * 1024 * 1024 * 2  # a gig multiplied by 2
        a = 787878788888888888888888888888 * (i * Gig)
        a = a * i
        print(str(a) * 2)

I found this code froze my PC within 5 minutes.
Save it as a .pyw file so the RAM allocation runs in the background.
If it doesn't freeze your PC, try increasing the value of a.
To stop it quickly, save this code in a .py file:

import os

# First we send signals
os.system("TASKKILL /im pythonw.exe")
os.system("TASKKILL /im python.exe")
print("Forceful termination")
# Now we forcefully terminate
# pythonw.exe if running in IDLE or in the background
os.system("TASKKILL /im python.exe /f")
os.system("TASKKILL /im pythonw.exe /f")
os.system("pause")

Answer 4 (score: 1)

x = bytearray(1024*1024*1000)

This eats about 1 GB of memory.
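One caveat I would add: whether all of those bytes immediately count as resident memory can depend on the allocator and on OS overcommit. Writing into the buffer, one byte per page, guarantees the pages are committed (scaled down to ~100 MB here; the 4096-byte page size is an assumption, query os.sysconf for the real value):

```python
buf = bytearray(1024 * 1024 * 100)  # ~100 MB, scaled down from the 1 GB above

PAGE = 4096  # assumed page size
for i in range(0, len(buf), PAGE):
    buf[i] = 1  # one write per page forces it to become resident
```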

Answer 5 (score: 1)

This function allocates memory into a list of bytes objects. Each item in the list is effectively unique and of equal length. The function also logs its allocations. I have tested it up to 3.7 TiB. It uses the humanfriendly package, but you can remove that if you don't want it.

It does use a loop, but at least it lets you optionally customize how much to allocate in each iteration. For example, you can use a value 8× higher for multiplier_per_allocation.

import logging
import secrets
from typing import Optional

from humanfriendly import format_size

log = logging.getLogger(__name__)


def fill_memory(*, num_unique_bytes_per_allocation: int = 1024, multiplier_per_allocation: int = 1024 ** 2, max_allocations: Optional[int] = None) -> None:
    """Allocate available memory into a list of effectively unique bytes objects.

    This function is for diagnostic purposes.

    :param num_unique_bytes_per_allocation: Each allocation is created by multiplying a random sequence of bytes of this length.
    :param multiplier_per_allocation: Each allocation is created by multiplying the random sequence of bytes by this number.
    :param max_allocations: Optional number of max allocations.
    """
    # Ref: https://stackoverflow.com/a/66109163/
    num_allocation_bytes = num_unique_bytes_per_allocation * multiplier_per_allocation
    log.info(
        f"Allocating cumulative instances of {num_allocation_bytes:,} bytes ({format_size(num_allocation_bytes)}) each. "
        f"Each allocation uses {num_unique_bytes_per_allocation:,} unique bytes ({format_size(num_unique_bytes_per_allocation)}) "
        f"with a multiplier of {multiplier_per_allocation:,} ({format_size(multiplier_per_allocation)})."
    )

    # Allocate memory
    allocated = []
    num_allocation = 1
    while True:
        unique_bytes_for_allocation = secrets.token_bytes(num_unique_bytes_per_allocation)
        allocated.append(unique_bytes_for_allocation * multiplier_per_allocation)
        num_total_bytes_allocated = num_allocation * num_allocation_bytes
        log.info(f"Used a total of {num_total_bytes_allocated:,} bytes ({format_size(num_total_bytes_allocated)}) via {num_allocation:,} allocations.")
        if max_allocations and (max_allocations == num_allocation):
            break
        num_allocation += 1

Sample output:

>>> import logging
>>> logging.basicConfig(level=logging.INFO)

>>> fill_memory()
INFO:Allocating cumulative instances of 1,073,741,824 bytes (1 GiB) each. Each allocation uses 1,024 unique bytes (1 KiB) with a multiplier of 1,048,576 (1 MiB).
INFO:Used a total of 1,073,741,824 bytes (1 GiB) via 1 allocations.
INFO:Used a total of 2,147,483,648 bytes (2 GiB) via 2 allocations.
INFO:Used a total of 3,221,225,472 bytes (3 GiB) via 3 allocations.
INFO:Used a total of 4,294,967,296 bytes (4 GiB) via 4 allocations.
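If you would rather avoid the humanfriendly and logging dependencies, the core trick — multiplying a short random prefix so each chunk is unique yet cheap to generate — fits in a few lines. This is my own condensed sketch of the function above, with a finite default so it stops on its own:

```python
import secrets

def fill_memory_simple(unique_bytes=1024, multiplier=1024 ** 2, max_allocations=4):
    """Allocate max_allocations chunks of unique_bytes * multiplier bytes each."""
    allocated = []
    for i in range(max_allocations):
        prefix = secrets.token_bytes(unique_bytes)  # fresh random bytes per chunk
        allocated.append(prefix * multiplier)       # expand to the full chunk size
        print('allocated %d bytes so far' % ((i + 1) * unique_bytes * multiplier))
    return allocated

# small demo: 4 chunks of 1 MiB each
chunks = fill_memory_simple(unique_bytes=1024, multiplier=1024, max_allocations=4)
```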