I have a very simple Python script that reads a CSV file and sorts its rows by timestamp. However, the file is large enough (16 GB) that reading it consumes all of my RAM. Once usage hits 100% (that is, all 64 GB of RAM), my system freezes completely and I am forced to reboot the machine.
Here is the code:
import pandas as pd
from time import time
filename = 'AKER_OB.csv'
start_ = time()
file_ = pd.read_csv(filename)
end_ = time()
duration = end_ - start_
print("The duration to load that file : {}".format(duration))
# parse the timestamp column, then sort the rows by it
file_['TimeStamp'] = pd.to_datetime(file_['TimeStamp'], format="%Y-%m-%d %H:%M:%S")
file_ = file_.sort_values('TimeStamp')
Head of AKER_OB.csv:
TimeStamp,Bid1,BidSize1,Bid2,BidSize2,Bid3,BidSize3,Bid4,BidSize4,Bid5,BidSize5,Bid6,BidSize6,Bid7,BidSize7,Bid8,BidSize8,Bid9,BidSize9,Bid10,BidSize10,Bid11,BidSize11,Bid12,BidSize12,Bid13,BidSize13,Bid14,BidSize14,Bid15,BidSize15,Bid16,BidSize16,Bid17,BidSize17,Bid18,BidSize18,Bid19,BidSize19,Bid20,BidSize20,Ask1,AskSize1,Ask2,AskSize2,Ask3,AskSize3,Ask4,AskSize4,Ask5,AskSize5,Ask6,AskSize6,Ask7,AskSize7,Ask8,AskSize8,Ask9,AskSize9,Ask10,AskSize10,Ask11,AskSize11,Ask12,AskSize12,Ask13,AskSize13,Ask14,AskSize14,Ask15,AskSize15,Ask16,AskSize16,Ask17,AskSize17,Ask18,AskSize18,Ask19,AskSize19,Ask20,AskSize20
2016-10-08 00:00:00,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:01,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:02,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:03,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:04,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:05,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:06,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:07,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
2016-10-08 00:00:08,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
What is the correct way to solve this problem? A complete answer with code snippets would be greatly appreciated.
Answer 0 (score: 1)
Basically, you have to implement your own out-of-core (external) sort.
Use the pandas CSV chunker to split the file into two or more parts, sort each part (one at a time!), save it to a separate CSV file, and release that memory before moving on to the next part.
Then merge the sorted files: open all of the saved, pre-sorted parts with a CSV reader, repeatedly pick the next row in timestamp order across the chunks, and append the sorted rows to an output file, as in the sketch below.
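A minimal sketch of that two-pass approach, assuming the column layout shown above; the chunk size, the part-file names, and the heapq-based k-way merge are illustrative choices, not from the original answer:

import csv
import heapq

import pandas as pd

CSV_IN = 'AKER_OB.csv'
CSV_OUT = 'AKER_OB_sorted.csv'
CHUNK_ROWS = 1_000_000  # tune so that one chunk fits comfortably in RAM

# Pass 1: sort each chunk on its own and spill it to a temporary CSV.
# With an ISO-style "YYYY-MM-DD HH:MM:SS" timestamp, lexicographic order
# equals chronological order, so the raw strings can be sorted directly.
part_files = []
for i, chunk in enumerate(pd.read_csv(CSV_IN, chunksize=CHUNK_ROWS)):
    chunk = chunk.sort_values('TimeStamp')
    part = 'sorted_part_{:04d}.csv'.format(i)
    chunk.to_csv(part, index=False)
    part_files.append(part)
    del chunk  # free the chunk before loading the next one

# Pass 2: k-way merge of the pre-sorted parts into one output file.
handles = [open(p, newline='') for p in part_files]
try:
    readers = [csv.reader(h) for h in handles]
    header = None
    for r in readers:
        header = next(r)  # skip each part's (identical) header row
    with open(CSV_OUT, 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(header)
        # heapq.merge streams the rows, so only one row per part
        # is held in memory at any time
        for row in heapq.merge(*readers, key=lambda row: row[0]):
            writer.writerow(row)
finally:
    for h in handles:
        h.close()

Because both passes stream the data, peak memory stays at roughly one chunk regardless of the total file size.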
Answer 1 (score: 0)
Just split the file read into chunks (see the sketch below). Here is a similar case.
Also consider adding a swap partition or a swap file to your operating system, which will help move the problem out of RAM in other cases as well.
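A minimal sketch of the chunked read; the chunk size and the per-chunk work are placeholders:

import pandas as pd

# chunksize makes read_csv yield DataFrames of at most 500_000 rows
# instead of loading the whole 16 GB file into memory at once
for chunk in pd.read_csv('AKER_OB.csv', chunksize=500_000):
    # replace this with real per-chunk work (e.g. sort and spill to disk)
    print(chunk['TimeStamp'].min(), chunk['TimeStamp'].max())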