Merging two large dataframes results in a memory error

Asked: 2019-02-14 08:59:50

Tags: python, pandas

I am trying to merge two very large dataframes, and it gives me a memory error. This is the SQL query I am trying to convert to pandas:

SELECT a.period, a.houseid, a.custid, a.productid, b.local_time
FROM table_a a
JOIN table_b b
  ON a.period = b.period
  AND a.productid = b.productid
  AND b.local_time BETWEEN a.start_time AND a.end_time

Table_a and Table_b both contain millions of rows. I am trying to join the tables on their keys, and also to require that local_time in table_b falls between start_time and end_time in table_a.
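A direct pandas translation of that SQL would look something like the sketch below (illustrative only; it assumes the time columns are already parsed as datetimes and uses the column names from the sample tables that follow):

# Rough pandas equivalent of the SQL above (illustrative; assumes datetime columns)
merged = df1.merge(df2, left_on=['period', 'prodid'], right_on=['PERIOD', 'prodid'], how='inner')
# Keep only rows where localtime falls inside [START_TIME, END_TIME]
merged = merged[merged['localtime'].between(merged['START_TIME'], merged['END_TIME'])]

The intermediate merge materializes every row pair that shares (period, prodid) before the time filter is applied, which is presumably where the memory error comes from.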

  

DF1:

period  houseid custid prodid localtime     
20181001    1   aa  2   01/10/2018 19:04    
20181001    1   zz  9   01/10/2018 15:57    
20181001    1   zz  178 01/10/2018 13:01    
20181001    1   zz  231 02/10/2018 02:51
  

DF2:

PERIOD    prodid   Name Product_info    START_TIME  END_TIME

20181001    2   Xab GHI 01/10/2018 19:00    01/10/2018 19:29
20181001    2   Xab QQQ 01/10/2018 19:30    01/10/2018 19:59
20181001    2   Xab asd 01/10/2018 20:00    01/10/2018 20:29
20181001    9   S2  Angele  01/10/2018 14:00    01/10/2018 14:59
20181001    9   S2  Road    01/10/2018 15:00    01/10/2018 15:59
20181001    9   S2  Flash   01/10/2018 16:00    01/10/2018 16:59
20181001    9   S2  Simpson 01/10/2018 17:00    01/10/2018 17:29
20181001    178 T3  Chase   01/10/2018 13:00    01/10/2018 13:59
20181001    178 T3  Chase   01/10/2018 14:00    01/10/2018 14:59
20181001    178 T3  Elaine  01/10/2018 15:00    01/10/2018 15:59
  

Expected result for DF1:

period  houseid custid   prodid    localtime Product_info Name

20181001    1   aa  2   01/10/2018 19:04    GHI     Xab
20181001    1   zz  9   01/10/2018 15:57    Road    S2
20181001    1   zz  178 01/10/2018 13:01    Chase   T3
20181001    1   zz  231 02/10/2018 02:51    None    None

Please help me. Thank you.

1 Answer:

Answer 0 (score: 0)

OK, here is my solution. I hope it is good enough for your case; it is all I can offer at the moment. The other method would be to loop through one table and apply a conditional check against START_TIME and END_TIME, but since you said the tables have millions of rows, I decided on this approach instead.
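For comparison, that row-wise alternative would be a sketch along these lines (purely illustrative; lookup_name is a hypothetical helper and the time columns are assumed to be parsed datetimes):

# Hypothetical row-wise lookup, shown only to illustrate the rejected approach
def lookup_name(row):
    # Scan df2 for an interval of the same product that contains this localtime
    match = df2[(df2['prodid'] == row['prodid'])
                & (df2['START_TIME'] <= row['localtime'])
                & (row['localtime'] <= df2['END_TIME'])]
    return match['Name'].iloc[0] if len(match) else None

# df1['Name'] = df1.apply(lookup_name, axis=1)  # correct, but scans df2 once per df1 row

That is a full scan of df2 for every row of df1, which does not scale to millions of rows, hence the binned join below.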

How many steps you need here depends on how the START_TIME values in DF2 are binned. My solution takes 2 steps, because I first join on the half-hourly START_TIME values and then repeat the process for the hourly ones.

import pandas as pd


df1 = pd.read_excel('my_sample_data.xls')
df2 = pd.read_excel('my_sample_data2.xls')

# Parse the time columns as real datetimes so the join keys line up, then
# construct a new join column for df1 based on half-hourly START_TIME
df1['localtime'] = pd.to_datetime(df1['localtime'])
df2['START_TIME'] = pd.to_datetime(df2['START_TIME'])
df1['START_TIME'] = df1['localtime'].dt.floor('30min')

# Drop unneeded columns from df2 and index it by the join keys
df2 = df2.loc[:,['START_TIME', 'prodid', 'Product_info', 'Name']]
df2.set_index(['prodid', 'START_TIME'], inplace=True)

df = df1.join(df2, on=['prodid', 'START_TIME'])
# Good portion: rows that already matched a half-hourly START_TIME
df_done = df.loc[df['Name'].notnull()]
# Bad portion: rows that still need a match, kept with their original columns
df_nan = df.loc[df['Name'].isnull(), ['period', 'houseid', 'custid', 'prodid', 'localtime']]

# Some ranges in DF2 come with hourly frequencies. Repeat the same process above for this case
df_nan['START_TIME'] = df_nan['localtime'].dt.floor('60min')
df_nan = df_nan.join(df2, on=['prodid', 'START_TIME'])

df = pd.concat([df_done, df_nan])

>>>df
     period  houseid custid  prodid           localtime          START_TIME Product_info Name
0  20181001        1     aa       2 2018-01-10 19:04:00 2018-01-10 19:00:00          GHI  Xab
2  20181001        1     zz     178 2018-01-10 13:01:00 2018-01-10 13:00:00        Chase   T3
1  20181001        1     zz       9 2018-01-10 15:57:00 2018-01-10 15:00:00         Road   S2
3  20181001        1     zz     231 2018-02-10 02:51:00 2018-02-10 02:00:00          NaN  NaN
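If the fixed half-hour/hour binning ever stops matching the real data, another option worth trying is pd.merge_asof, which attaches the most recent START_TIME per prodid and then lets you discard matches whose interval has already ended. This is only a sketch, not part of the solution above; it assumes df1 and df2 are the frames as originally read from Excel (df2 still has END_TIME and no index) and that the time columns are datetimes:

# Sketch of an interval join via merge_asof (assumes raw df1/df2 with datetime columns)
df1 = df1.sort_values('localtime')
df2 = df2.sort_values('START_TIME')
out = pd.merge_asof(df1, df2[['prodid', 'START_TIME', 'END_TIME', 'Product_info', 'Name']],
                    left_on='localtime', right_on='START_TIME',
                    by='prodid', direction='backward')
# Clear matches whose interval ended before localtime
out.loc[out['localtime'] > out['END_TIME'], ['Product_info', 'Name']] = None

This avoids both the row-by-row scan and a memory-hungry cross join, because merge_asof works directly on the sorted keys.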