How to cumulatively sum unique column values over a pandas index

Asked: 2013-04-05 04:36:10

Tags: python numpy pandas

I have a pandas DataFrame that I created with

df = pd.read_table('sorted_df_changes.txt', index_col=0, parse_dates=True, names=['date', 'rev_id', 'score'])

and it is structured as follows:

                     page_id     score  
date
2001-05-23 19:50:14  2430        7.632989
2001-05-25 11:53:55  1814033     18.946234
2001-05-27 17:36:37  2115        3.398154
2001-08-04 21:00:51  311         19.386016
2001-08-04 21:07:42  314         14.886722

date is the index and its type is DatetimeIndex.

Each page_id can appear on one or more dates (it is not unique), and there are roughly 1 million rows in total. Together all the pages make up a document.

I need the score of the whole document at every date, counting only the most recent score for any given page_id.
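Put differently: at any cut-off date, the desired value is the sum over pages of each page's most recent score seen so far. A minimal sketch of that definition for a single timestamp (assuming the frame above, with the id column named page_id as in the printed output):

# Total document score at one cut-off time t: keep only the rows up to t,
# take the last (most recent) score per page_id, and sum those.
t = '2001-08-04 21:07:42'
total_at_t = df.loc[:t].groupby('page_id')['score'].last().sum()

The question is how to compute this efficiently for every date in the index at once.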

Example

Sample data

                     page_id     score  
date
2001-05-23 19:50:14  1           3
2001-05-25 11:53:55  2           4
2001-05-27 17:36:37  1           5
2001-05-28 19:36:37  1           1

Sample solution

                     score  
date
2001-05-23 19:50:14  3
2001-05-25 11:53:55  7 (3 + 4)
2001-05-27 17:36:37  9 (5 + 4)
2001-05-28 19:36:37  5 (1 + 4)

The entry for id 2 keeps being counted on every later date because it never reappears, but each time id 1 reappears its new score replaces its old one, as sketched below.
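A tiny sketch of that bookkeeping with a plain Python dict, just to make the rule concrete (the four rows are the sample data above):

# Keep the most recent score per page_id; the running document total at each
# date is simply the sum of those latest scores: 3, 7, 9, 5.
latest = {}
for date, pid, score in [('2001-05-23 19:50:14', 1, 3),
                         ('2001-05-25 11:53:55', 2, 4),
                         ('2001-05-27 17:36:37', 1, 5),
                         ('2001-05-28 19:36:37', 1, 1)]:
    latest[pid] = score              # a repeated id overwrites its old score
    print(date, sum(latest.values()))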

4 Answers:

Answer 0 (score: 3)

Edit:

In the end, I found a solution that doesn't need a loop:

df.score.groupby(df.page_id).transform(lambda s: s.diff().combine_first(s)).cumsum()
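Why this works (a sketch, using the six-row frame built further down): within each page_id group, diff() rewrites every repeated score as the change from that page's previous score, combine_first(s) puts the raw score back for the first occurrence, and the global cumsum() then accumulates those deltas into the running document total:

# Per-page deltas for the six-row example below:
#   page 1: scores 3, 5, 1 -> deltas 3, +2, -4   (first value kept as-is)
#   page 2: score  4       -> delta  4
#   page 3: scores 6, 9    -> deltas 6, +3
# In date order the deltas are 3, 4, 2, -4, 6, 3, whose cumulative sum is
# 3, 7, 9, 5, 11, 14 -- the desired running document score.
deltas = df.score.groupby(df.page_id).transform(lambda s: s.diff().combine_first(s))
running = deltas.cumsum()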

Originally I thought a for loop was needed:

import numpy as np
import pandas as pd
from StringIO import StringIO
txt = """date,page_id,score
2001-05-23 19:50:14,  1,3
2001-05-25 11:53:55,  2,4
2001-05-27 17:36:37,  1,5
2001-05-28 19:36:37,  1,1
2001-05-28 19:36:38,  3,6
2001-05-28 19:36:39,  3,9
"""

df = pd.read_csv(StringIO(txt), index_col=0)

def score_sum_py(page_id, scores):
    # Running total: for each row, subtract the page's previous score and add
    # its new one, so every page contributes only its latest score.
    from itertools import izip
    score_sum = 0
    last_score = [0]*(np.max(page_id)+1)  # latest score per factorized page id
    result = np.empty_like(scores)
    for i, (pid, score) in enumerate(izip(page_id, scores)):
        score_sum = score_sum - last_score[pid] + score
        last_score[pid] = score
        result[i] = score_sum
    result.name = "score_sum"
    return result

print score_sum_py(pd.factorize(df.page_id)[0], df.score)

Output:

date
2001-05-23 19:50:14     3
2001-05-25 11:53:55     7
2001-05-27 17:36:37     9
2001-05-28 19:36:37     5
2001-05-28 19:36:38    11
2001-05-28 19:36:39    14
Name: score_sum

If the Python-level loop turns out to be slow, you can try converting the two series page_id and score to plain Python lists first and looping over those; doing the arithmetic with native Python integers may be faster, as in the sketch below.
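A hedged sketch of that list-based variant (score_sum_list is just an illustrative name; it mirrors score_sum_py but touches only plain lists and ints inside the loop):

def score_sum_list(page_id, scores):
    # Same running-total bookkeeping as score_sum_py, but iterating over
    # plain Python lists so no per-element numpy/pandas indexing is paid.
    score_sum = 0
    last_score = [0] * (max(page_id) + 1)
    result = []
    for pid, score in zip(page_id, scores):
        score_sum = score_sum - last_score[pid] + score
        last_score[pid] = score
        result.append(score_sum)
    return result

pids = pd.factorize(df.page_id)[0].tolist()
print(score_sum_list(pids, df.score.tolist()))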

If speed really matters, you can also try Cython:

%%cython
cimport cython
cimport numpy as np
import numpy as np

@cython.wraparound(False) 
@cython.boundscheck(False)
def score_sum(np.ndarray[int] page_id, np.ndarray[long long] scores):
    cdef int i
    cdef long long score_sum, pid, score
    cdef np.ndarray[long long] last_score, result

    score_sum = 0
    last_score = np.zeros(np.max(page_id)+1, dtype=np.int64)  # latest score per factorized page id
    result = np.empty_like(scores)

    for i in range(len(page_id)):
        pid = page_id[i]
        score = scores[i]
        score_sum = score_sum - last_score[pid] + score
        last_score[pid] = score
        result[i] = score_sum

    result.name = "score_sum"
    return result

Here I use pandas.factorize() to convert page_id into an array of integers in the range 0 to N-1, where N is the number of unique page_id values. You could also cache each page_id's last_score in a dict and skip pandas.factorize(), as sketched below.
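A small sketch of that dict-based variant (score_sum_dict is an illustrative name; it keys last_score on the raw page_id values so no factorization is needed):

def score_sum_dict(page_id, scores):
    # last_score maps each raw page_id to its most recent score.
    last_score = {}
    score_sum = 0
    result = []
    for pid, score in zip(page_id, scores):
        score_sum = score_sum - last_score.get(pid, 0) + score
        last_score[pid] = score
        result.append(score_sum)
    return pd.Series(result, index=scores.index, name='score_sum')

print(score_sum_dict(df.page_id, df.score))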

Answer 1 (score: 2)

A different data structure makes this calculation easier to reason about. It doesn't perform as well as the other answers, but I think it's worth mentioning (mainly because it uses my favourite pandas function...):

In [11]: scores = pd.get_dummies(df['page_id']).mul(df['score'], axis=0).where(lambda x: x != 0, np.nan)

In [12]: scores
Out[12]: 
                      1   2   3
date                           
2001-05-23 19:50:14   3 NaN NaN
2001-05-25 11:53:55 NaN   4 NaN
2001-05-27 17:36:37   5 NaN NaN
2001-05-28 19:36:37   1 NaN NaN
2001-05-28 19:36:38 NaN NaN   6
2001-05-28 19:36:39 NaN NaN   9

In [13]: scores.ffill()
Out[13]: 
                     1   2   3
date                          
2001-05-23 19:50:14  3 NaN NaN
2001-05-25 11:53:55  3   4 NaN
2001-05-27 17:36:37  5   4 NaN
2001-05-28 19:36:37  1   4 NaN
2001-05-28 19:36:38  1   4   6
2001-05-28 19:36:39  1   4   9

In [14]: scores.ffill().sum(axis=1)
Out[14]: 
date
2001-05-23 19:50:14     3
2001-05-25 11:53:55     7
2001-05-27 17:36:37     9
2001-05-28 19:36:37     5
2001-05-28 19:36:38    11
2001-05-28 19:36:39    14

Answer 2 (score: 1)

Is this what you want? I think it's a rather naive solution, though.

In [164]: df['result'] = [df[:i+1].groupby('page_id').last().sum()[0] for i in range(len(df))]

In [165]: df
Out[165]: 
                     page_id  score  result
date                                       
2001-05-23 19:50:14        1      3       3
2001-05-25 11:53:55        2      4       7
2001-05-27 17:36:37        1      5       9
2001-05-28 19:36:37        1      1       5
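Note that this recomputes a full groupby over the growing prefix df[:i+1] for every row, so it is roughly quadratic in the number of rows and is unlikely to scale to the ~1 million rows mentioned in the question.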

Answer 3 (score: 0)

This is an ad-hoc solution I put together using the standard library. I would love to see an elegant and efficient solution using pandas.

import csv
from collections import defaultdict

page_scores = defaultdict(lambda: 0)
date_scores = [] # [(date, score)]

def get_and_update_score_diff(page_id, new_score):
    diff = new_score - page_scores[page_id]
    page_scores[page_id] = new_score
    return diff

# Note: there are some duplicate dates and the file is sorted by date.
# Format: 2001-05-23T19:50:14Z, 2430, 7.632989
with open('sorted_df_changes.txt') as f:
    reader = csv.reader(f, delimiter='\t')

    first = next(reader)
    date_string, page_id, score = first[0], first[1], float(first[2])
    page_scores[page_id] = score
    date_scores.append((date_string, score))

    for date_string, page_id, score in reader:
        score = float(score)
        score_diff = get_and_update_score_diff(page_id, score)
        if date_scores[-1][0] == date_string:
            date_scores[-1] = (date_string, date_scores[-1][1] + score_diff)
        else:
            date_scores.append((date_string, date_scores[-1][1] + score_diff))
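
If the result is needed back in pandas afterwards, the collected pairs can be wrapped into a Series (a small sketch; doc_scores is just an illustrative name):

import pandas as pd

# Turn the accumulated (date, score) pairs into a Series indexed by
# timestamp, comparable with the outputs of the pandas-based answers above.
dates, totals = zip(*date_scores)
doc_scores = pd.Series(totals, index=pd.to_datetime(dates), name='score')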