Is there a way to speed up the following pandas for loop?

Asked: 2018-03-16 10:06:44

Tags: python pandas numpy

My dataframe contains 10,000,000 rows! After grouping, there are still roughly 9,000,000 sub-frames to loop over.

The code is:

import pandas as pd

data = pd.read_csv('big.csv')
for id, new_df in data.groupby(level=0):  # look at each mini df and do some analysis
    # some code for each of the small data frames
    ...

This is extremely inefficient, and the code has already been running for more than 10 hours.

Is there any way to speed it up?

Full code:

d = pd.DataFrame()  # new df to populate
print('Start of the loop')
for id, new_df in data.groupby(level=0):
    c = [new_df.iloc[i:] for i in range(len(new_df.index))]
    x = pd.concat(c, keys=new_df.index).reset_index(level=(2, 3), drop=True).reset_index()
    x = x.set_index(['level_0', 'level_1', x.groupby(['level_0', 'level_1']).cumcount()])
    d = pd.concat([d, x])
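
An aside on the loop itself: `d = pd.concat([d, x])` re-copies everything accumulated so far on every iteration, so the loop is quadratic in the output size even before the per-group work. A minimal sketch of the usual list-then-concat pattern (same names as above):

pieces = []
for id, new_df in data.groupby(level=0):
    c = [new_df.iloc[i:] for i in range(len(new_df.index))]
    x = pd.concat(c, keys=new_df.index).reset_index(level=(2, 3), drop=True).reset_index()
    x = x.set_index(['level_0', 'level_1', x.groupby(['level_0', 'level_1']).cumcount()])
    pieces.append(x)
d = pd.concat(pieces)  # single concatenation instead of one per iteration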

To get the data:

data = pd.read_csv('https://raw.githubusercontent.com/skiler07/data/master/so_data.csv', index_col=0).set_index(['id','date'])

Note:

Most ids have only 1 date, which corresponds to a single visit. For ids with more visits, I want to build them in a 3D format, e.g. storing all the visits in the second dimension. The output shape would be (id, visits, features).
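
For concreteness, a minimal sketch of that target shape, assuming the feature columns are everything besides id and date (the to_3d helper and feature_cols argument are illustrative names, not from the post):

import numpy as np
import pandas as pd

def to_3d(df, feature_cols):
    # stack each id's visit rows into a zero-padded (ids, visits, features) array
    groups = [g[feature_cols].to_numpy() for _, g in df.groupby('id')]
    max_visits = max(len(g) for g in groups)
    out = np.zeros((len(groups), max_visits, len(feature_cols)))
    for i, g in enumerate(groups):
        out[i, :len(g), :] = g
    return out

Ids with a single visit then occupy one slice along the second axis, padded with zeros up to the longest visit history.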

3 Answers:

Answer 0 (score: 3)

Here is one way to speed this up: add the desired new rows in code that processes the rows directly. This saves the overhead of constantly constructing small dataframes. Your 100,000-row sample runs in a couple of seconds on my machine, while your code took more than 100 seconds with only 10,000 rows of sample data. This seems to represent a couple of orders of magnitude of improvement.

Code:

import pandas as pd

def make_3d(csv_filename):

    def make_3d_lines(a_df):
        # generator that emits the expanded rows directly,
        # avoiding the construction of many small dataframes
        a_df['depth'] = 0
        depth = 0
        prev = None
        accum = []
        for row in a_df.values.tolist():
            row[0] = 0
            key = row[1]
            if key == prev:
                depth += 1
                accum.append(row)
            else:
                if depth == 0:
                    yield row
                else:
                    depth = 0
                    to_emit = []
                    for i in range(len(accum)):
                        date = accum[i][2]
                        for j, r in enumerate(accum[i:]):
                            to_emit.append(list(r))
                            to_emit[-1][0] = j
                            to_emit[-1][2] = date
                    for r in to_emit[1:]:
                        yield r
                accum = [row]
            prev = key

    df_data = pd.read_csv(csv_filename)
    # rename the first (unnamed index) column to 'depth'
    df_data.columns = ['depth'] + list(df_data.columns)[1:]

    new_df = pd.DataFrame(
        make_3d_lines(df_data.sort_values('id date'.split())),
        columns=df_data.columns
    ).astype(dtype=df_data.dtypes.to_dict())

    return new_df.set_index('id date'.split())

Results:

import time

start_time = time.time()
df = make_3d('big-data.csv')
print(time.time() - start_time)

df = df.drop(columns=['feature%d' % i for i in range(3, 25)])
print(df[df['depth'] != 0].head(10))

Answer 1 (score: 1)

I don't believe your approach to feature engineering is the best, but I will stick to answering your question.

In Python, iterating over a dict is much faster than iterating over a DataFrame.

Here is how I managed to process a huge DataFrame (~100,000,000 rows):

# reset the index so that level 0 ('id') comes back as a column in your dataset
df = data.reset_index()  # index was (id, date)

# split the DataFrame by id and store the sub-dataframes
# in a dictionary keyed by id
d = dict(tuple(df.groupby('id')))

# process each sub-frame
for key, value in d.items():
    # do something with value
    # value is a sub-dataframe where id is unique
    ...
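
The payoff depends on how often you revisit the groups: the dict is built once, while a fresh groupby redoes the split on every pass. A rough way to check this on your own data (a sketch; the loop bodies are placeholders):

import time

start = time.time()
d = dict(tuple(data.reset_index().groupby('id')))  # one-time cost of splitting
for key, value in d.items():                       # afterwards, plain dict iteration
    _ = len(value)                                 # placeholder work
print('dict build + iteration: %.2fs' % (time.time() - start))

start = time.time()
for key, value in data.groupby(level=0):           # re-splits on every fresh pass
    _ = len(value)                                 # placeholder work
print('groupby iteration: %.2fs' % (time.time() - start))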

Answer 2 (score: 0)

Modified @Stephen's code:

def make_3d(dataset):

    def make_3d_lines(a_df):
        a_df['depth'] = 0  # reset the depth column (the renamed first column) to 0
        depth = 1  # start from 1 so that the very first row takes the else branch
        prev = None
        accum = []  # accumulates the block of rows belonging to a given user
        for row in a_df.values.tolist():  # for each row in our dataset
            row[0] = 0  # reset this row's depth value
            key = row[1]  # this is the id of the row
            if key == prev:  # if this row's id matches the previous row's id, accumulate
                depth += 1
                accum.append(row)
            else:  # this id is new, so the previous block is complete -> process it
                if depth == 0:  # the previous id appeared only once -> emit that row from accum
                    yield accum[0]  # remember that depth stays 0
                else:  # expand the block and emit each row
                    depth = 0
                    to_emit = []  # prepare the list of rows to emit
                    for i in range(len(accum)):  # for each unique date in the accumulated block
                        date = accum[i][2]  # each expansion starts from this date
                        for j, r in enumerate(accum[i:]):
                            to_emit.append(list(r))
                            to_emit[-1][0] = j  # set the depth
                            to_emit[-1][2] = date  # set the date
                    for r in to_emit[0:]:
                        yield r
                accum = [row]
            prev = key

    df_data = dataset.reset_index()
    df_data.columns = ['depth'] + list(df_data.columns)[1:]

    new_df = pd.DataFrame(
        make_3d_lines(df_data.sort_values('id date'.split(), ascending=[True, False])),
        columns=df_data.columns
    ).astype(dtype=df_data.dtypes.to_dict())

    return new_df.set_index('id date'.split())

Test:

t = pd.DataFrame(data={'id':[1,1,1,1,2,2,3,3,4,5], 'date':[20180311,20180310,20180210,20170505,20180312,20180311,20180312,20180311,20170501,20180304], 'feature':[10,20,45,1,14,15,20,20,13,11],'result':[1,1,0,0,0,0,1,0,1,1]})
t = t.reindex(columns=['id','date','feature','result'])
print(t)
              id     date      feature      result
0              1  20180311          10           1
1              1  20180310          20           1
2              1  20180210          45           0
3              1  20170505           1           0
4              2  20180312          14           0
5              2  20180311          15           0
6              3  20180312          20           1
7              3  20180311          20           0
8              4  20170501          13           1
9              5  20180304          11           1

Output:

                        depth     feature      result
id            date                                   
1             20180311      0          10           1
              20180311      1          20           1
              20180311      2          45           0
              20180311      3           1           0
              20180310      0          20           1
              20180310      1          45           0
              20180310      2           1           0
              20180210      0          45           0
              20180210      1           1           0
              20170505      0           1           0
2             20180312      0          14           0
              20180312      1          15           0
              20180311      0          15           0
3             20180312      0          20           1
              20180312      1          20           0
              20180311      0          20           0
4             20170501      0          13           1