I have an m4.4xlarge (64 GB RAM) EC2 box. I am running dask with pandas and I get the memory error below.
The error shows up after roughly 24 hours of running, which is about how long the job should take to finish, so I am not sure whether it is caused by running out of RAM, by running out of disk space when the script writes the large DF to disk with DF.to_csv(), or by some internal pandas/numpy memory limit.
raise(remote_exception(res, tb))
dask.async.MemoryError:
Traceback
---------
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/dask/async.py", line 267, in execute_task
result = _execute_task(task, data)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/dask/async.py", line 248, in _execute_task
args2 = [_execute_task(a, cache) for a in args]
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/dask/async.py", line 249, in _execute_task
return func(*args2)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 4061, in apply
return self._apply_standard(f, axis, reduce=reduce)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/frame.py", line 4179, in _apply_standard
result = result._convert(datetime=True, timedelta=True, copy=False)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/generic.py", line 3004, in _convert
copy=copy)).__finalize__(self)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 2941, in convert
return self.apply('convert', **kwargs)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 2901, in apply
bm._consolidate_inplace()
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 3278, in _consolidate_inplace
self.blocks = tuple(_consolidate(self.blocks))
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 4269, in _consolidate
_can_consolidate=_can_consolidate)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 4289, in _merge_blocks
new_values = _vstack([b.values for b in blocks], dtype)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/pandas/core/internals.py", line 4335, in _vstack
return np.vstack(to_stack)
File "/home/ec2-user/anaconda2/lib/python2.7/site-packages/numpy/core/shape_base.py", line 230, in vstack
return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
Update
Some additional information, following up on MRocklin's answer.
Here is how I execute the workflow:
def dask_stats_calc(dfpath, v1, v2, v3...):
    dfpath_ddf = dd.from_pandas(dfpath, npartitions=16, sort=False)
    return dfpath_ddf.apply(calculate_stats, axis=1, args=(dfdaily, v1, v2, v3...)).compute(get=get).stack().reset_index(drop=True)

f_threaded = partial(dask_stats_calc, dfpath, v1, v2, v3..., multiprocessing.get)
f_threaded()
Now, the issue is that dfpath is a df with 1.4 million rows, so dfpath_ddf.apply() runs over 1.4 million rows.
df.to_csv() only happens after the entire dfpath_ddf.apply() has completed, but, as you said, it is better to write to disk periodically.
The question now is: how do I implement something like a periodic write to disk, say every 200k rows? I suppose I could break dfpath_ddf into 200k-row chunks (or something like that) and run each chunk sequentially?
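For illustration only, a minimal sketch of that manual approach could look like the following, assuming calculate_stats, dfdaily and v1, v2, v3 are the same objects as in the snippet above; the 200k chunk size and the output file names are placeholders:
chunk_size = 200000  # rows per chunk (placeholder)
for i, start in enumerate(range(0, len(dfpath), chunk_size)):
    chunk = dfpath.iloc[start:start + chunk_size]
    # run the row-wise stats on this chunk only
    stats = chunk.apply(calculate_stats, axis=1, args=(dfdaily, v1, v2, v3))
    # write the chunk's result immediately so it can be freed from RAM
    stats.to_csv('stats_chunk_{}.csv'.format(i))
This keeps at most one chunk's output in memory, at the cost of losing dask's parallelism for the apply itself.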
Answer 0 (score: 1)
Sometimes tasks pile up in RAM while they wait for a single file on disk to be written. Sequential output like this is inherently tricky for a parallel system. If you need a single output file, then I recommend trying the same computation with the single-threaded scheduler to see whether that makes a difference:
with dask.set_options(get=dask.async.get_sync):
    DF.to_csv('out.csv')
Alternatively (and preferably), you can try writing out many CSV files. This is much easier to schedule, because a task does not have to wait for its predecessors to finish before writing to disk and being dropped from RAM.
DF.to_csv('out.*.csv')
So a common and fairly robust way to compute and write in parallel is to combine your computation with the call to to_csv:
ddf = dd.from_pandas(df, npartitions=100)
ddf.apply(myfunc).to_csv('out.*.csv')
This splits your dataframe into chunks, calls your function on each chunk, writes that chunk to disk, and then deletes the intermediate values, freeing up space.
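Applied to the setup from the question, that combined pattern might look roughly like this; it is only a sketch, with calculate_stats, dfdaily and v1, v2, v3 carried over from the question, and the number of partitions and the output file pattern chosen arbitrarily:
import dask.dataframe as dd

def dask_stats_to_csv(dfpath, dfdaily, v1, v2, v3):
    # partition the pandas frame, apply the row-wise stats function, and
    # write each partition to its own CSV as soon as it finishes, so
    # intermediate results are dropped from RAM instead of accumulating
    dfpath_ddf = dd.from_pandas(dfpath, npartitions=100, sort=False)
    stats = dfpath_ddf.apply(calculate_stats, axis=1, args=(dfdaily, v1, v2, v3))
    stats.to_csv('stats_out.*.csv')  # one output file per partition

dask_stats_to_csv(dfpath, dfdaily, v1, v2, v3)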