While reading an article on memory management in Python, I came across a few doubts:
import copy
import memory_profiler

@profile  # `profile` is injected into builtins when run via `python -m memory_profiler`
def function():
    x = list(range(1000000))  # allocate a big list
    y = copy.deepcopy(x)
    del x
    return y

if __name__ == "__main__":
    function()
$ python -m memory_profiler memory_profiler_demo.py
Filename: memory_profiler_demo.py
Line #    Mem usage    Increment   Line Contents
================================================
     4   30.074 MiB   30.074 MiB   @profile
     5                             def function():
     6   61.441 MiB   31.367 MiB       x = list(range(1000000))  # allocate a big list
     7  111.664 MiB   50.223 MiB       y = copy.deepcopy(x)  # doubt 1
     8  103.707 MiB   -7.957 MiB       del x  # doubt 2
     9  103.707 MiB    0.000 MiB       return y
So I have two doubts: on line 7, why does copying the list take even more memory than building it in the first place, and on line 8, why does del x free only about 7 MiB?
Answer 0 (score: 1):
First, let's start with why line 8 only frees 7 MiB.
Once you allocate a bunch of memory, Python and your OS and/or malloc library both guess that you're likely to allocate a bunch of memory again. On modern platforms, it's a lot faster to reuse that memory in-process than to release it and reallocate it from scratch, while it costs very little to keep extra unused pages of memory in your process's space, so it's usually the right tradeoff. (But of course usually != always, and the blog you linked seems to be in large part about how to work out that you're building an application where it's not the right tradeoff and what to do about it.)
A default build of CPython on Linux virtually never releases any memory; on other POSIX platforms (including macOS) it almost never does. On Windows, it does release memory more often, but there are still constraints. Basically, if any piece of a single allocation from Windows is still in use (or even sitting in the middle of a freelist chain), that allocation can't be returned to Windows. So if you're fragmenting memory (which you usually are), that memory can't be freed. The blog post you linked to explains this to some extent, and there are much better resources than an SO answer to explain it further.
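As a rough illustration (my own sketch, not a benchmark from the post), the tracemalloc module shows that the interpreter really does free the object's memory on del, even when OS-level tools like memory_profiler report a much smaller drop:

```python
import tracemalloc

tracemalloc.start()
x = list(range(1000000))  # big list buffer plus ~1M int objects
before, _ = tracemalloc.get_traced_memory()
del x
after, _ = tracemalloc.get_traced_memory()

# The interpreter has released tens of MiB at the Python level...
print((before - after) / 2**20, "MiB freed as seen by tracemalloc")
# ...but pymalloc and the C allocator may keep those freed pages cached
# for reuse, so the process's resident set shrinks far less.
```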
If you really do need to allocate a lot of memory for a short time, release it, and never use it again, without holding onto all those pages, there's a common Unix idiom for that: you fork, then do the short-term allocation in the child and exit after passing back the small results in some way. (In Python, that usually means using multiprocessing.Process instead of os.fork directly.)
Now, why does your deepcopy take more memory than the initial construction?
I tested your code on my Mac laptop with python.org builds of 2.7, 3.5, and 3.6. What I found was that the list construction takes around 38 MiB (similar to what you're seeing), while the copy takes 42 MiB on 2.7, 31 MiB on 3.5, and 7 MiB on 3.6.
Slightly oversimplified, here's the 2.7 behavior: the functions in copy just call the type's constructor on an iterable of the elements (for copy) or on recursive copies of them (for deepcopy). For list, this means creating a list with a small starting capacity and then expanding it as it appends. That means you're not just creating a 1M-length array; you're also creating and throwing away arrays of 500K, 250K, etc. all the way down. The sum of all those lengths is equivalent to a 2M-length array. Of course you don't really need the sum of all of them (only the most recent array and the new one are ever live at the same time), but there's no guarantee the old arrays will be freed in a useful way that lets them get reused. (That might explain why I'm seeing about 1.5x the original construction while you're seeing about 2x, but I'd need a lot more investigation before betting anything on that part...)
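You can watch that append-driven growth directly (a sketch using sys.getsizeof, which measures only the list object and its pointer buffer, not the elements):

```python
import sys

def growth_sizes(n):
    """Record each distinct buffer size a list passes through while growing."""
    sizes = []
    lst = []
    for i in range(n):
        lst.append(i)
        s = sys.getsizeof(lst)
        if not sizes or s != sizes[-1]:
            sizes.append(s)
    return sizes

sizes = growth_sizes(100000)
# The list was reallocated dozens of times on its way up; every intermediate
# buffer was allocated, copied into, and then discarded.
print(len(sizes), "distinct allocations; final size", sizes[-1], "bytes")
```

Because the growth is geometric, the number of reallocations is logarithmic in the final length, but the discarded intermediate buffers still add up to memory traffic on the order of the final array itself.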
In 3.5, I believe the biggest difference is that a number of improvements over the 5 years since 2.7 mean that most of those expansions now get done by realloc, even if there is free memory in the pool that could be used instead. That changes a tradeoff that favored 32-bit over 64-bit on modern platforms into one that works the other way round: on 64-bit Linux/Mac/Windows, there are often free pages that can be tacked onto the end of an existing large allocation without remapping its address, so most of those reallocs mean no waste.
In 3.6, the huge change is probably #26167. Oversimplifying again, the list type knows how to copy itself by allocating all in one go, and the copy methods now take advantage of that for list and a few other builtin types. Sometimes there's no reallocation at all, and even when there is, it's usually done with the special-purpose LIST_APPEND code (which can be used when you can assume nobody outside the current function has access to the list yet) instead of the general-purpose list.append code (which can't).
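One visible consequence of that one-shot copy (a sketch of mine, again using sys.getsizeof on the list object only): a copied list gets an exact-size buffer, while an append-built list carries over-allocation slack from its last growth step.

```python
import copy
import sys

x = list(range(1000000))

# Built by repeated append: the buffer is over-allocated as it grows.
built = []
for item in x:
    built.append(item)

# On 3.6+, the shallow-copy path lets the list copy itself in one
# exact-size allocation instead of growing from a small capacity.
copied = copy.copy(x)

print(sys.getsizeof(copied), "<=", sys.getsizeof(built))
```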