For each pair of src and dest airport cities, I want to return the percentile of column a given a value in column b. I can do this manually (shown further down); the example df below has only 2 src/dest pairs, while my actual df has thousands:
dt src dest a b
0 2016-01-01 YYZ SFO 548.12 279.28
1 2016-01-01 DFW PDX 111.35 -65.50
2 2016-02-01 YYZ SFO 64.84 342.35
3 2016-02-01 DFW PDX 63.81 61.64
4 2016-03-01 YYZ SFO 614.29 262.83
{'a': {0: 548.12,
1: 111.34999999999999,
2: 64.840000000000003,
3: 63.810000000000002,
4: 614.28999999999996,
5: -207.49000000000001,
6: 151.31999999999999,
7: -56.43,
8: 611.37,
9: -296.62,
10: 6417.5699999999997,
11: -376.25999999999999,
12: 465.12,
13: -821.73000000000002,
14: 1270.6700000000001,
15: -1410.0899999999999,
16: 1312.6600000000001,
17: -326.25999999999999,
18: 1683.3699999999999,
19: -24.440000000000001,
20: 583.60000000000002,
21: -5.2400000000000002,
22: 1122.74,
23: 195.21000000000001,
24: 97.040000000000006,
25: 133.94},
'b': {0: 279.27999999999997,
1: -65.5,
2: 342.35000000000002,
3: 61.640000000000001,
4: 262.82999999999998,
5: 115.89,
6: 268.63999999999999,
7: 2.3500000000000001,
8: 91.849999999999994,
9: 62.119999999999997,
10: 778.33000000000004,
11: -142.78,
12: 1675.53,
13: -214.36000000000001,
14: 983.80999999999995,
15: -207.62,
16: 632.13999999999999,
17: -132.53,
18: 422.36000000000001,
19: 13.470000000000001,
20: 642.73000000000002,
21: -144.59999999999999,
22: 213.15000000000001,
23: -50.200000000000003,
24: 338.27999999999997,
25: -129.69},
'dest': {0: 'SFO',
1: 'PDX',
2: 'SFO',
3: 'PDX',
4: 'SFO',
5: 'PDX',
6: 'SFO',
7: 'PDX',
8: 'SFO',
9: 'PDX',
10: 'SFO',
11: 'PDX',
12: 'SFO',
13: 'PDX',
14: 'SFO',
15: 'PDX',
16: 'SFO',
17: 'PDX',
18: 'SFO',
19: 'PDX',
20: 'SFO',
21: 'PDX',
22: 'SFO',
23: 'PDX',
24: 'SFO',
25: 'PDX'},
'dt': {0: Timestamp('2016-01-01 00:00:00'),
1: Timestamp('2016-01-01 00:00:00'),
2: Timestamp('2016-02-01 00:00:00'),
3: Timestamp('2016-02-01 00:00:00'),
4: Timestamp('2016-03-01 00:00:00'),
5: Timestamp('2016-03-01 00:00:00'),
6: Timestamp('2016-04-01 00:00:00'),
7: Timestamp('2016-04-01 00:00:00'),
8: Timestamp('2016-05-01 00:00:00'),
9: Timestamp('2016-05-01 00:00:00'),
10: Timestamp('2016-06-01 00:00:00'),
11: Timestamp('2016-06-01 00:00:00'),
12: Timestamp('2016-07-01 00:00:00'),
13: Timestamp('2016-07-01 00:00:00'),
14: Timestamp('2016-08-01 00:00:00'),
15: Timestamp('2016-08-01 00:00:00'),
16: Timestamp('2016-09-01 00:00:00'),
17: Timestamp('2016-09-01 00:00:00'),
18: Timestamp('2016-10-01 00:00:00'),
19: Timestamp('2016-10-01 00:00:00'),
20: Timestamp('2016-11-01 00:00:00'),
21: Timestamp('2016-11-01 00:00:00'),
22: Timestamp('2016-12-01 00:00:00'),
23: Timestamp('2016-12-01 00:00:00'),
24: Timestamp('2017-01-01 00:00:00'),
25: Timestamp('2017-01-01 00:00:00')},
'src': {0: 'YYZ',
1: 'DFW',
2: 'YYZ',
3: 'DFW',
4: 'YYZ',
5: 'DFW',
6: 'YYZ',
7: 'DFW',
8: 'YYZ',
9: 'DFW',
10: 'YYZ',
11: 'DFW',
12: 'YYZ',
13: 'DFW',
14: 'YYZ',
15: 'DFW',
16: 'YYZ',
17: 'DFW',
18: 'YYZ',
19: 'DFW',
20: 'YYZ',
21: 'DFW',
22: 'YYZ',
23: 'DFW',
24: 'YYZ',
25: 'DFW'}}
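The dict above is the df.to_dict() dump of the example frame; to rebuild it, a minimal sketch (assuming the printed dict is pasted and bound to a name such as d):

from pandas import Timestamp  # needed so the pasted literal's Timestamp(...) entries resolve
import pandas as pd

df = pd.DataFrame(d)  # d = the dict printed above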
I want the percentile for each src and dest pair, so there should be only one percentile value per pair. I only want the percentile of the b value at date = 2017-01-01, taken against the entire a column for that src/dest pair, i.e. src=YYZ and dest=SFO. Does that make sense?
I can do this manually, e.g. for a specific pair:

from scipy import stats
import datetime as dt
import pandas as pd

p0 = dt.datetime(2017, 1, 1)

# let's slice df for src=YYZ and dest=SFO
x = df[(df.src == 'YYZ') &
       (df.dest == 'SFO') &
       (df.dt == p0)].b.values[0]

# given B, what percentile does it fall in for the entire column A for YYZ, SFO
stats.percentileofscore(df['a'], x)

61.53846153846154
In the case above I did this manually for the YYZ/SFO pair. However, my df has thousands of pairs. How do I vectorize this with pandas features such as groupby and apply, rather than looping over every pair? There must be a way to do it with apply and a function?

The df I want should look something like:

   src dest  percentile
0  YYZ  SFO       61.54
1  DFW  PDX       23.07
2  XXX  YYY   blahblah1
3  AAA  BBB   blahblah2
...

UPDATE

I implemented the following:

def b_percentile_a(df, x, y, b):
    z = df[(df['src'] == x) & (df['dest'] == y)].a
    r = stats.percentileofscore(z, b)
    return r

b_vector_df = df[df.dt == p0]
b_vector_df['p0_a_percentile_b'] = \
    b_vector_df.apply(lambda x: b_percentile_a(df, x.src, x.dest, x.b), axis=1)

It takes 5.16 seconds for 100 pairs. I have ~55,000 pairs, so a single run takes on the order of 50 minutes, and I need to run this repeatedly, which adds up to days of run time.

There must be a faster way?
Answer 0 (score: 6)
Got an incredible time saving!

Output:

size of a_list: 49998 random unique values

percentile_1 (your given df - scipy)
calculated percentiles 104 times - 104 records in 0:00:07.777022
percentile_9 (class PercentileOfScore (rank_searchsorted_list) using the given df)
calculated percentiles 104 times - 104 records in 0:00:00.000609

    dt          src  dest  a        b        pct                scipy
 0: 2016-01-01  YYZ  SFO   548.12   279.28   74.81299251970079  74.8129925197
 1: 2016-01-01  DFW  PDX   111.35   -65.5    24.66698667946718  24.6669866795
 2: 2016-02-01  YYZ  SFO    64.84   342.35   76.4810592423697   76.4810592424
 3: 2016-02-01  DFW  PDX    63.81    61.64   63.84655386215449  63.8465538622
...
24: 2017-01-01  YYZ  SFO    97.04   338.28   76.3570542821712   76.3570542822
25: 2017-01-01  DFW  PDX   133.94  -129.69   21.4668586743469   21.4668586743
Looking at the implementation of scipy.percentileofscore, I found that the entire list(a) gets copied, inserted into, sorted and searched on every single call to percentileofscore. So I implemented my own class PercentileOfScore:
import numpy as np
class PercentileOfScore(object):
    def __init__(self, aList):
        self.a = np.array(aList)
        self.a.sort()
        self.n = float(len(self.a))
        self.pct = self.__rank_searchsorted_list
    # end def __init__

    def __rank_searchsorted_list(self, score_list):
        adx = np.searchsorted(self.a, score_list, side='right')
        pct = []
        for idx in adx:
            # Python 2.x needs explicit type casting float(int)
            pct.append((float(idx) / self.n) * 100.0)
        return pct
    # end def __rank_searchsorted_list
# end class PercentileOfScore
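A minimal usage sketch, assuming the df and p0 from the question, applying the class once per (src, dest) pair so that each pair gets exactly one percentile:

import pandas as pd

rows = []
for (src, dest), g in df.groupby(['src', 'dest']):
    # rank this pair's b value at p0 against the pair's entire a column
    scorer = PercentileOfScore(g['a'].tolist())
    b_at_p0 = g.loc[g['dt'] == p0, 'b'].values[0]  # assumes one row per pair at p0
    rows.append({'src': src, 'dest': dest, 'percentile': scorer.pct([b_at_p0])[0]})

pair_percentiles = pd.DataFrame(rows, columns=['src', 'dest', 'percentile'])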
I don't think def percentile_7 below fits your needs exactly, since dt is not taken into account:
PctOS = None
def percentile_7(df_flat):
    global PctOS
    result = {}
    for k in df_flat.pair_dict.keys():
        # df_flat.pair_dict = { 'src.dst': [b,b,...bn] }
        result[k] = PctOS.pct(df_flat.pair_dict[k])
    return result
# end def percentile_7
In your manual example you use the entire df.a; this example uses df_flat.a_list instead, but I'm not sure whether that is what you want?
from PercentileData import DF_flat
def main():
    # DF_flat.data = {'dt.src.dest':[a,b]}
    df_flat = DF_flat()

    # Instantiate global PctOS
    global PctOS
    # df_flat.a_list = [a,a,...an]
    PctOS = PercentileOfScore(df_flat.a_list)

    result = percentile_7(df_flat)
    # result = dict{'src.dst':[pct,pct...pctn]}
Tested with Python 3.4.2 and 2.7.9 - numpy 1.8.2
Answer 1 (score: 4)

Assuming you have a list of pairs, say pairs = [[a,b], [c,d], ...], and df defined:
for pair in pairs:
    # get the corresponding rows for each pair
    bvalues = df.loc[(df['src'] == pair[0]) & (df['dest'] == pair[1])][['a', 'b']]
    # apply the percentileofscore map
    b_vector_df['p0_a_percentile_b'] = bvalues.b.apply(lambda x: stats.percentileofscore(bvalues.a, x))
I'm not entirely sure what the goal is. My understanding is that for each src, dest pair you take the b value, look up the corresponding a values, and then compute the percentile of b within those a values. Let me know if this helps :)
Edit: assuming you only use the five columns date, src, dest, a, and b, you could consider working on a copy of the dataframe that contains only those 5 columns. It reduces the work needed for each extraction step; I find it more efficient to carry around only the data you actually need.
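As a minimal sketch of that slimming step (assuming the question's column names, with dt as the date column):

df_small = df[['dt', 'src', 'dest', 'a', 'b']].copy()  # work on a trimmed copy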
Selecting rows from a Dataframe based on values in multiple columns in pandas is a discussion that may be relevant to you.
Answer 2 (score: 4)

You can group by multiple columns at once:
# takes the b value at a specified point
# and returns its percentile within the full a array
def b_pct(df, p0):
    bval = df.b[df.dt == p0]
    assert bval.size == 1, 'can have only one entry per timestamp'
    bval = bval.values[0]
    # compute the percentile (scaled to a percentage, matching percentileofscore)
    return 100.0 * (df.a < bval).sum() / len(df.a)

# splits the full dataframe up into groups by (src, dest) trajectory and
# returns a dataframe of the form src, dest, percentile
def trajectory_b_percentile(df, p0):
    percentile_df = pd.DataFrame([pd.Series([s, d, b_pct(g, p0)],
                                            index=['src', 'dest', 'percentile'])
                                  for ((s, d), g) in df.groupby(['src', 'dest'])])
    return percentile_df
For comparison, the apply-based code from the question spits out
dt src dest a b p0_a_percentile_b
24 2017-01-01 YYZ SFO 97.04 338.28 23.076923
25 2017-01-01 DFW PDX 133.94 -129.69 46.153846
while trajectory_b_percentile returns
src dest percentile
0 DFW PDX 46.1538
1 YYZ SFO 23.0769
I don't see any speedup for the 25 entries here, but it should become more noticeable on larger data.
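A quick usage sketch, assuming the df and p0 defined in the question:

percentile_df = trajectory_b_percentile(df, p0)  # one row per (src, dest) pair
print(percentile_df)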
Answer 3 (score: 1)

It seems you can get another fairly large speedup by converting everything to numpy arrays and building the percentiles as a numpy array:
# Get airport strings as indices
_, ir = np.unique(df['src'].values, return_inverse=True)
_, ic = np.unique(df['dest'].values, return_inverse=True)
# Get a and b columns
a = df['a'].values
b = df['b'].values
# Compute percentile scores in a numpy array
prc = np.zeros(a.shape)
for i in range(0, a.shape[0]):
    prc[i] = stats.percentileofscore(a[np.logical_and(ir == ir[i], ic == ic[i])], b[i])
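Since prc lines up row-for-row with df, it can be attached directly if you want it as a column (the column name below is purely illustrative):

df['p0_a_percentile_b_np'] = prc  # illustrative name, avoids clobbering the original column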
On a dataframe containing 24000 entries (see the construction below), running %%timeit on this gives

1 loop, best of 3: 2.17 s per loop
whereas the original version

df['p0_a_percentile_b'] = \
    df.apply(lambda x: b_percentile_a(df, x.src, x.dest, x.b), axis=1)

yields

1 loop, best of 3: 1min 2s per loop
which is much slower. I also checked that the two snippets produce the same output by running np.all(prc == df.p0_a_percentile_b.values), which yields True.
I constructed a dataframe to test this, and I share the construction here for reproducibility. I took 2000 airport pairs from 100 unique airport names, generated 12 dataframe rows per pair, and then generated random a and b columns:
import pandas as pd
import numpy as np
import scipy.stats as stats
import numpy.matlib as mat
# Construct dataframe
T=12
N_airports = 100
N_entries = 2000
airports = np.arange(0, N_airports).astype('string')
src = mat.repmat(airports[np.random.randint(N_airports, size=(N_entries, ))], 1, T)
dest = mat.repmat(airports[np.random.randint(N_airports, size=(N_entries, ))], 1, T)
a = np.random.uniform(size=N_entries*T)
b = np.random.uniform(size=N_entries*T)
df = pd.DataFrame(np.vstack((src, dest, a, b)).T, columns=['src', 'dest', 'a', 'b'])
Answer 4 (score: 1)

Please verify and comment whether this represents your data model!

SET(DFW PDX):

    dt          src  dest  a         b
 0: 2016-01-01  DFW  PDX    111.35    -65.5
 1: 2016-02-01  DFW  PDX     63.81     61.64
 2: 2016-03-01  DFW  PDX   -207.49    115.89
 3: 2016-04-01  DFW  PDX    -56.43      2.35
 4: 2016-05-01  DFW  PDX   -296.62     62.12
 5: 2016-06-01  DFW  PDX   -376.26   -142.78
 6: 2016-07-01  DFW  PDX   -821.73   -214.36
 7: 2016-08-01  DFW  PDX  -1410.09   -207.62
 8: 2016-09-01  DFW  PDX   -326.26   -132.53
 9: 2016-10-01  DFW  PDX    -24.44     13.47
10: 2016-11-01  DFW  PDX     -5.24   -144.6
11: 2016-12-01  DFW  PDX    195.21    -50.2
12: 2017-01-01  DFW  PDX    133.94   -129.69

Example: compute the percentile for record 0 of SET(DFW PDX):

    dt          src  dest  a       b
 0: 2016-01-01  DFW  PDX  111.35  -65.5

Pseudocode: stats.percentileofscore(SET(DFW PDX)[a0 ... a12], -65.5) = 46.15
Example: compute the percentiles for all of SET(DFW PDX).

Pseudocode, using stats.percentileofscore:

for record in SET(DFW PDX):
    stats.percentileofscore(SET(DFW PDX)[a0 ... a12], record.b)
Output: pct0 ... pct12

Using rank_searchsorted_list, no 'for record' loop is needed:

rank_searchsorted_list(SET(DFW PDX)[a0 ... a12], SET(DFW PDX)[b0 ... b12])
Output: [pct0 ... pct12]
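For reference, a standalone sketch of rank_searchsorted_list along the lines of the PercentileOfScore class in Answer 0 (a paraphrase, not verbatim code from that answer):

import numpy as np

def rank_searchsorted_list(a_list, score_list):
    # sort a once, then rank every score against it with a single searchsorted call
    a = np.sort(np.asarray(a_list, dtype=float))
    adx = np.searchsorted(a, score_list, side='right')
    return [(float(idx) / len(a)) * 100.0 for idx in adx]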
This is SET(DFW PDX) vectorized:
OBJECT = {'DFW PDX': [
    ['2016-01-01', '2016-02-01', '2016-03-01', '2016-04-01', '2016-05-01', '2016-06-01', '2016-07-01', '2016-08-01', '2016-09-01', '2016-10-01', '2016-11-01', '2016-12-01', '2017-01-01'],
    [111.35, 63.81, -207.49, -56.43, -296.62, -376.26, -821.73, -1410.09, -326.26, -24.44, -5.24, 195.21, 133.94],
    [-65.5, 61.64, 115.89, 2.35, 62.12, -142.78, -214.36, -207.62, -132.53, 13.47, -144.6, -50.2, -129.69],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
]}
Example: compute the percentiles for OBJECT['DFW PDX'].

Using stats.percentileofscore:
a = 1; b = 2
for b_value in OBJECT['DFW PDX'][b]:
    stats.percentileofscore(OBJECT['DFW PDX'][a], b_value)
Output: pct0...pct12
Using rank_searchsorted_list, no 'for b_value in' loop is needed:
a = 1; b = 2; pct = 3
vector = OBJECT['DFW PDX']
vector[pct] = rank_searchsorted_list( vector[a], vector[b] )
Output:
    dt          src  dest  a         b        pct    scipy
 0: 2016-01-01  DFW  PDX    111.35    -65.5   46.15  46.15
 1: 2016-02-01  DFW  PDX     63.81     61.64  69.23  69.23
 2: 2016-03-01  DFW  PDX   -207.49    115.89  84.61  84.61
 3: 2016-04-01  DFW  PDX    -56.43      2.35  69.23  69.23
 4: 2016-05-01  DFW  PDX   -296.62     62.12  69.23  69.23
 5: 2016-06-01  DFW  PDX   -376.26   -142.78  46.15  46.15
 6: 2016-07-01  DFW  PDX   -821.73   -214.36  38.46  38.46
 7: 2016-08-01  DFW  PDX  -1410.09   -207.62  38.46  38.46
 8: 2016-09-01  DFW  PDX   -326.26   -132.53  46.15  46.15
 9: 2016-10-01  DFW  PDX    -24.44     13.47   69.23  69.23
10: 2016-11-01  DFW  PDX     -5.24   -144.6    46.15  46.15
11: 2016-12-01  DFW  PDX    195.21    -50.2    53.84  53.84
12: 2017-01-01  DFW  PDX    133.94   -129.69   46.15  46.15
Please verify and confirm the calculated percentiles!