Outer merge on large pandas DataFrames causes MemoryError --- how to merge "big data" with pandas?

Asked: 2016-10-03 05:16:28

Tags: python pandas memory dataframe out-of-memory

I have two pandas DataFrames, df1 and df2, in a fairly standard format:

   one  two  three   feature
A    1    2      3   feature1
B    4    5      6   feature2  
C    7    8      9   feature3   
D    10   11     12  feature4
E    13   14     15  feature5 
F    16   17     18  feature6 
...

df2 has the same format. The DataFrames are about 175 MB and 140 MB in size.

merged_df = pd.merge(df1, df2, on='feature', how='outer', suffixes=('','_features'))

When I run this, I get the following MemoryError:

File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 39, in merge
    return op.get_result()
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 217, in get_result
    join_index, left_indexer, right_indexer = self._get_join_info()
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 353, in _get_join_info
    sort=self.sort, how=self.how) 
File "/nfs/sw/python/python-3.5.1/lib/python3.5/site-packages/pandas/tools/merge.py", line 559, in _get_join_indexers
    return join_func(lkey, rkey, count, **kwargs)
File "pandas/src/join.pyx", line 187, in pandas.algos.full_outer_join (pandas/algos.c:61680)
File "pandas/src/join.pyx", line 196, in pandas.algos._get_result_indexer (pandas/algos.c:61978)
MemoryError

Is there some "size limit" on pandas DataFrames when merging? I'm surprised this doesn't work. Could this be a bug in some version of pandas?

EDIT: As mentioned in the comments, many duplicates in the merge column can easily cause RAM problems. See: Python Pandas Merge Causing Memory Overflow
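To see why duplicates matter, here is a minimal sketch (with made-up toy data): a key that appears m times on the left and n times on the right produces m * n output rows, so the result can be far larger than either input even when the input files are modest.

```python
import pandas as pd

# One key, duplicated 3 times on the left and 4 times on the right.
left = pd.DataFrame({'feature': ['f1'] * 3, 'x': range(3)})
right = pd.DataFrame({'feature': ['f1'] * 4, 'y': range(4)})

# The merge pairs every left row with every right row for that key:
# 3 * 4 = 12 rows from a single key value.
out = pd.merge(left, right, on='feature', how='outer')
print(len(out))  # 12
```

With many heavily duplicated keys, this quadratic blowup, not the raw 175 MB + 140 MB, is what exhausts memory.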

The question now is: how can we actually do this merge? It seems the best approach would be to partition the DataFrames somehow.

2 Answers:

Answer 0 (score: 2)

You can try splitting df1 by the unique values of the merge column, running merge on each piece, and finally combining the outputs with concat.

If you need only the plain outer join, I think you will still have memory problems, because the concatenated result is just as large. But if you add some code that filters the output of each loop iteration down before appending it, it can work:

dfs = []
# merge one key's worth of rows at a time instead of all at once
for val in df.feature.unique():
    df1 = pd.merge(df[df.feature == val], df2, on='feature', how='outer', suffixes=('', '_key'))
    # filter each partial result here so the pieces stay small, e.g.:
    # http://stackoverflow.com/a/39786538/2901002
    # df1 = df1[(df1.start <= df1.start_key) & (df1.end <= df1.end_key)]
    dfs.append(df1)

df = pd.concat(dfs, ignore_index=True)
print(df)

Another solution is to use dask.dataframe.DataFrame.merge.

Answer 1 (score: 1)

Try specifying a dtype for the numeric columns to reduce the size of the existing DataFrames, e.g.:

import numpy as np

df[['one','two', 'three']] = df[['one','two', 'three']].astype(np.int32)

This should reduce memory significantly, and will hopefully let you get through the merge.
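A quick sketch of how to measure the saving (toy data standing in for the real frames): pandas defaults integer columns to int64, so downcasting to int32 halves their footprint, which you can verify with memory_usage.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'one': [1, 4, 7], 'two': [2, 5, 8], 'three': [3, 6, 9]})

before = df.memory_usage(deep=True).sum()
# int64 -> int32 halves the storage of each numeric column
df[['one', 'two', 'three']] = df[['one', 'two', 'three']].astype(np.int32)
after = df.memory_usage(deep=True).sum()

print(before, after)
```

The same idea applies to the merge column itself: if 'feature' is a repetitive string column, converting it with astype('category') before merging can cut memory even more than downcasting the integers.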