I'm having trouble freeing memory in Python. The situation is basically this: I have a large dataset split across 4 files. Each file contains a list of 5000 numpy arrays of shape (3072, 412). I'm trying to extract columns 10 through 20 of each array into a new list.
What I'd like to do is read each file in sequence, extract the data I need, and then free the memory I was using before moving on to the next file. However, deleting the object, setting it to None, and calling gc.collect() don't seem to work. Here is the snippet of code I'm using:
import gc
import joblib
import psutil

num_files = 4
start = 10
end = 20
fields = []
for j in range(num_files):
    print("Working on file ", j)
    source_filename = base_filename + str(j) + ".pkl"  # base_filename defined elsewhere
    print("Memory before: ", psutil.virtual_memory())
    partial_db = joblib.load(source_filename)
    print("GC tracking for partial_db is ", gc.is_tracked(partial_db))
    print("Memory after loading partial_db: ", psutil.virtual_memory())
    for x in partial_db:
        fields.append(x[:, start:end])
    print("Memory after appending to fields: ", psutil.virtual_memory())
    print("GC Counts before del: ", gc.get_count())
    partial_db = None
    print("GC Counts after del: ", gc.get_count())
    gc.collect()
    print("GC Counts after collection: ", gc.get_count())
    print("Memory after freeing partial_db: ", psutil.virtual_memory())
Here is the output after a couple of files:
Working on file 0
Memory before: svmem(total=67509161984, available=66177449984,percent=2.0, used=846712832, free=33569669120, active=27423051776, inactive=5678043136, buffers=22843392, cached=33069936640, shared=15945728)
GC tracking for partial_db is True
Memory after loading partial_db: svmem(total=67509161984, available=40785944576, percent=39.6, used=26238181376, free=8014237696, active=54070542336, inactive=4540620800, buffers=22892544, cached=33233850368, shared=15945728)
Memory after appending to fields: svmem(total=67509161984, available=40785944576, percent=39.6, used=26238181376, free=8014237696, active=54070542336, inactive=4540620800, buffers=22892544, cached=33233850368, shared=15945728)
GC Counts before del: (0, 7, 3)
GC Counts after del: (0, 7, 3)
GC Counts after collection: (0, 0, 0)
Memory after freeing partial_db: svmem(total=67509161984, available=40785944576, percent=39.6, used=26238181376, free=8014237696, active=54070542336, inactive=4540620800, buffers=22892544, cached=33233850368, shared=15945728)
Working on file 1
Memory before: svmem(total=67509161984, available=40785944576, percent=39.6, used=26238181376, free=8014237696, active=54070542336, inactive=4540620800, buffers=22892544, cached=33233850368, shared=15945728)
GC tracking for partial_db is True
Memory after loading partial_db: svmem(total=67509161984, available=15378006016, percent=77.2, used=51626561536, free=265465856, active=62507155456, inactive=3761905664, buffers=10330112, cached=15606804480, shared=15945728)
Memory after appending to fields: svmem(total=67509161984, available=15378006016, percent=77.2, used=51626561536, free=265465856, active=62507155456, inactive=3761905664, buffers=10330112, cached=15606804480, shared=15945728)
GC Counts before del: (0, 4, 2)
GC Counts after del: (0, 4, 2)
GC Counts after collection: (0, 0, 0)
Memory after freeing partial_db: svmem(total=67509161984, available=15378006016, percent=77.2, used=51626561536, free=265465856, active=62507155456, inactive=3761905664, buffers=10330112, cached=15606804480, shared=15945728)
If I let it keep going, it eats up all the memory and raises a MemoryError exception.
Does anyone know what I can do to make sure the data used by partial_db actually gets released?
Answer 0 (score: 8)
The problem is here:
for x in partial_db:
    fields.append(x[:, start:end])
The reason slicing a numpy array (unlike a normal Python list) takes almost no time and wastes no space is that it doesn't make a copy; it just creates another view into the array's memory. Normally, that's great. But here, it means you're keeping x's memory alive even after you release x, because you never release those slices.
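A small sketch makes the view behavior visible (the (3072, 412) shape is taken from the question; the variable names are illustrative):

```python
import numpy as np

big = np.zeros((3072, 412))   # stands in for one array from partial_db
view = big[:, 10:20]          # a slice: no data is copied

# The slice shares memory with, and holds a reference to, the original array
print(np.shares_memory(big, view))  # True
print(view.base is big)             # True

# So as long as `view` is alive, the full (3072, 412) buffer cannot be freed,
# no matter what you do with the name `big`.
```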
There are other ways around this, but the simplest is to append a copy of each slice instead:
for x in partial_db:
    fields.append(x[:, start:end].copy())
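With .copy(), each appended slice owns its own small buffer, so nothing in fields keeps the large loaded arrays alive. A quick sketch of the difference (illustrative names; the shapes match the question, assuming float64 arrays):

```python
import numpy as np

big = np.zeros((3072, 412))     # stands in for one loaded array (float64)
sliced = big[:, 10:20].copy()   # independent buffer, not a view

print(np.shares_memory(big, sliced))  # False: no shared memory
print(sliced.nbytes)                  # 3072 * 10 * 8 = 245760 bytes
del big                               # the ~10 MB buffer can now be reclaimed
```

Each copy costs about 240 KB versus the roughly 10 MB per source array, so fields stays small and each partial_db really can be freed between files.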