MemoryError with pandas

Date: 2018-08-24 22:47:48

Tags: python pandas

I am trying to merge two DataFrames with pandas, but I get a MemoryError. It is probably a memory issue, since my files have ~40,000,000 rows (df1) and 80,000,000 rows and 5 columns (df2a). However, when I merge df1 with another, similar file of 90,000,000 rows and 5 columns (df2b), the merge works fine.

Here is my code:

# Merge the files with pandas python
import pandas as pd

# Read lookup file from GTEx
df1 = pd.read_table("GTEx.lookup_table.txt.gz", compression="gzip", sep="\t", header=0)
df1.columns = df1.columns.str.replace('rs_id_dbSNP147_GRCh37p13', 'rsid')

df2a = pd.read_table("Proximal.nominals.FULL.txt.gz", sep=" ", header=None, compression="gzip") # this file gives the Memory error
df2b = pd.read_table("Proximal.nominals2.FULL.txt.gz", sep=" ", header=None, compression="gzip") # this file merges just fine
df2a_merge = pd.merge(left=df1, right=df2a, left_on="rsid", right_on="rsid")
df2b_merge = pd.merge(left=df1, right=df2b, left_on="rsid", right_on="rsid")

I have checked how much memory each file uses; df2b actually takes up more memory, yet it still merges fine:

>>> print("df2a dataset uses ", df2a.memory_usage().sum() / 1024**2, " MB ")
  ('df2a dataset uses ', 3342, ' MB ')

>>> print("df2b dataset uses ", df2b.memory_usage().sum() / 1024**2, " MB ")
  ('df2b dataset uses ', 3470, ' MB ')
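A caveat about those numbers: for object (string) columns, `memory_usage()` without `deep=True` counts only the 8-byte pointers, not the strings themselves, so both figures above likely understate the real footprint. A minimal illustration with synthetic data (not the question's files):

```python
import pandas as pd

# A small frame with an object (string) column, standing in for the rsid data
df = pd.DataFrame({"rsid": ["rs" + str(i) * 10 for i in range(1000)]})

shallow = df.memory_usage().sum()        # counts only the per-row pointers
deep = df.memory_usage(deep=True).sum()  # also counts the Python string objects

# The deep measurement is substantially larger for string columns
assert deep > shallow
```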

Also, the data types in df2a and df2b are identical:

gene_id      object
rsid         object
distance      int64
n_pval      float64
nslope       float64
dtype: object
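Since the merge key (`rsid`) is an object column, one possible way to shrink it before merging is to convert it to a categorical, which stores each distinct string once plus compact integer codes. This is a sketch with toy data; how much it helps depends on how many distinct rsids the real files contain:

```python
import pandas as pd

# Toy merge-key column with many repeated values, like duplicated rsids
rsid = pd.Series(["rs1001", "rs1002", "rs1003", "rs1004"] * 250)
rsid_cat = rsid.astype("category")

# Categorical stores the unique strings once, plus a small integer code per row
assert rsid_cat.memory_usage(deep=True) < rsid.memory_usage(deep=True)
```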

This is the error I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/users/jfertaj/python/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 54, in merge
return op.get_result()
  File "/users/jfertaj/python/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 569, in get_result
join_index, left_indexer, right_indexer = self._get_join_info()
  File "/users/jfertaj/python/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 734, in _get_join_info
right_indexer) = self._get_join_indexers()
  File "/users/jfertaj/python/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 713, in _get_join_indexers
how=self.how)
  File "/users/jfertaj/python/lib/python2.7/site-packages/pandas/core/reshape/merge.py", line 998, in _get_join_indexers
return join_func(lkey, rkey, count, **kwargs)
  File "pandas/_libs/join.pyx", line 71, in pandas._libs.join.inner_join (pandas/_libs/join.c:120300)

By the way, I want to do an inner merge.
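One common workaround for an inner merge that does not fit in memory is to do it piecewise: split the larger frame into row chunks, merge each chunk against df1, and concatenate the results, so only one chunk's join is materialized at a time. A sketch with toy frames (the `variant_id` column is a hypothetical stand-in for df1's other columns, and the chunk size is illustrative):

```python
import pandas as pd

# Toy stand-ins for df1 and df2a
df1 = pd.DataFrame({"rsid": ["rs1", "rs2", "rs3"], "variant_id": ["v1", "v2", "v3"]})
df2a = pd.DataFrame({"rsid": ["rs2", "rs3", "rs4"], "n_pval": [0.1, 0.2, 0.3]})

chunk_size = 2  # in practice, a few million rows per chunk
pieces = [
    df1.merge(df2a.iloc[i:i + chunk_size], on="rsid", how="inner")
    for i in range(0, len(df2a), chunk_size)
]
df2a_merge = pd.concat(pieces, ignore_index=True)
# Only rs2 and rs3 appear in both frames
```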

1 Answer:

Answer 0 (score: 0)

For DataFrames this large, I recommend the dask package.
In particular, see its DataFrame, which handles large pandas DataFrames and parallelizes computations on them.

Your code could be modified as follows:

import dask.dataframe as dd

dd1 = dd.from_pandas(df1, npartitions=10)
dd2a = dd.from_pandas(df2a, npartitions=10)

dd2a_merge = dd1.merge(dd2a, left_on="rsid",  right_on='rsid')
dd2a_merge = dd2a_merge.compute()