pandas MemoryError with pd.concat

Date: 2016-11-02 06:26:48

Tags: python python-2.7 pandas

I am reading CSVs using pandas:

df_from_each_file = (pd.read_csv(StringIO(f), compression='gzip', dtype=str) for f in all_files)
final_df = pd.concat(df_from_each_file, ignore_index=True)

The total number of rows across all_files is around 9,000,000, but each individual file is small.

When pd.concat runs, it raises a MemoryError.

The system has 16 GB of RAM and 16 CPUs at 2 GHz. Is the memory sufficient here? Is there anything else I can do to get rid of the MemoryError?

I have read about chunksize etc., but each file is small, so that should not be the problem. How can I make the concat free of the MemoryError?

Here is the traceback:

final_df = pd.concat(df_from_each_file, ignore_index=True)
File "/home/jenkins/fsroot/workspace/ric-dev-sim-2/VENV/lib/python2.7/site-packages/pandas/tools/merge.py", line 1326, in concat
return op.get_result()
File "/home/jenkins/fsroot/workspace/ric-dev-sim-2/VENV/lib/python2.7/site-packages/pandas/tools/merge.py", line 1517, in get_result
copy=self.copy)
File "/home/jenkins/fsroot/workspace/ric-dev-sim-2/VENV/lib/python2.7/site-packages/pandas/core/internals.py", line 4797, in concatenate_block_managers
placement=placement) for placement, join_units in concat_plan]
File "/home/jenkins/fsroot/workspace/ric-dev-sim-2/VENV/lib/python2.7/site-packages/pandas/core/internals.py", line 4902, in concatenate_join_units
concat_values = _concat._concat_compat(to_concat, axis=concat_axis)
File "/home/jenkins/fsroot/workspace/ric-dev-sim-2/VENV/lib/python2.7/site-packages/pandas/types/concat.py", line 165, in _concat_compat
return np.concatenate(to_concat, axis=axis)
MemoryError

The df.info() output for one of the files is:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 12516 entries, 0 to 12515
Columns: 322 entries, #RIC to Reuters Classification Scheme.1
dtypes: object(322)
memory usage: 30.7+ MB
None
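
As a note, the "30.7+ MB" above is only a lower bound, because all 322 columns are object dtype; a small sketch (file name is illustrative) of how the real in-memory size of one such file can be checked:

import pandas as pd

# Path is illustrative - load one gzipped file the same way as above.
df = pd.read_csv('one_file.csv.gz', compression='gzip', dtype=str)

# memory_usage='deep' measures the Python string objects inside the
# object columns instead of only counting pointers (the "+" in
# "30.7+ MB" marks that it is just a lower bound).
df.info(memory_usage='deep')
print(df.memory_usage(deep=True).sum() / 1024 ** 2)  # total size in MB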

1 Answer:

Answer 0 (score: 2)

First of all, don't use the dtype=str parameter unless you really need it.
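
For instance, a sketch (file name illustrative, '#RIC' taken from the df.info() above) of letting pandas infer dtypes, or forcing str only on the columns that actually need it:

import pandas as pd

# Let pandas infer dtypes instead of forcing all 322 columns to object:
df = pd.read_csv('one_file.csv.gz', compression='gzip')

# or keep str only where it is really required:
df = pd.read_csv('one_file.csv.gz', compression='gzip', dtype={'#RIC': str})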

If you are going to use this approach, then, looking at your next question, you would need at least 2 * 90GB = 180GB of RAM for the 9M rows (90GB for the list of generated DFs plus 90GB for the concatenated DF):

Calculation (scaling the 17.1 GB used by 1,713,078 rows up to 9,000,000 rows):

17.1GB / 1713078 * (9*10**6) / 1GB

In [18]: 17.1*1024**3/1713078*(9*10**6)/1024**3
Out[18]: 89.8382910760631

So you will have to process your data files one at a time and save them into something that can handle that amount of data - I would use HDF or a database such as MySQL/PostgreSQL.
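
A minimal sketch of the HDF route, assuming all_files holds the gzipped CSV paths and using an illustrative output file name; only one file's DataFrame is in memory at a time (requires PyTables):

import pandas as pd

store = pd.HDFStore('combined.h5')  # illustrative output file
for f in all_files:
    df = pd.read_csv(f, compression='gzip')
    # Append to an on-disk table; for wide string columns you may need
    # to pass min_itemsize up front so longer strings in later files still fit.
    store.append('data', df, data_columns=True, index=False)
store.close()

# Later, read it back (optionally with a where= filter or in chunks):
final_df = pd.read_hdf('combined.h5', 'data')

With data_columns=True the stored columns can later be filtered via where= queries, at some cost in write speed.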