Chain filters on a Series

Time: 2016-08-17 18:58:11

Tags: python python-3.x pandas dataframe

It's convenient to chain filters on a DataFrame using query:

import numpy as np
import pandas as pd

# quoting from the SO answer above
df = pd.DataFrame(np.random.randn(30, 3), columns=['a', 'b', 'c'])
df_filtered = df.query('a>0').query('0<b<2')

What if I need to do the same for a Series?

df = pd.DataFrame({'a': [0, 0, 1, 1, 2, 2], 'b': [1, 2, 3, 4, 5, 6]})
df.groupby('a').b.sum().query('? > 3').query('? % 3 == 1')

doesn't exist (for a good reason: most query syntax allows access to multiple columns).
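
For context, a minimal check (not part of the original post) showing why this fails: Series simply has no query method.

import pandas as pd

df = pd.DataFrame({'a': [0, 0, 1, 1, 2, 2], 'b': [1, 2, 3, 4, 5, 6]})
s = df.groupby('a').b.sum()

# Series does not provide .query, so the chained call above raises AttributeError.
try:
    s.query('? > 3')
except AttributeError as exc:
    print(exc)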

1 answer:

Answer 0 (score: 3):

You can use the to_frame() method:

In [10]: df.groupby('a').b.sum().to_frame('v').query('v > 3').query('v % 3 == 1')
Out[10]:
   v
a
1  7

If you need the result as a Series:

In [12]: df.groupby('a').b.sum().to_frame('v').query('v > 3').query('v % 3 == 1').v
Out[12]:
a
1    7
Name: v, dtype: int64
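
As an alternative to the round trip through to_frame(), the filters can also be chained directly on the Series with callable .loc indexing; a sketch, assuming pandas >= 0.18 where callable indexers are available:

import pandas as pd

df = pd.DataFrame({'a': [0, 0, 1, 1, 2, 2], 'b': [1, 2, 3, 4, 5, 6]})

# Each .loc[callable] filters the Series produced by the previous step,
# so the conditions chain without converting to a DataFrame.
result = (df.groupby('a').b.sum()
            .loc[lambda s: s > 3]
            .loc[lambda s: s % 3 == 1])
print(result)

This yields the same single-element Series (a=1, value 7) as the to_frame() approach.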
  

Does to_frame() involve copying the Series?

It involves a call to the DataFrame constructor:

https://github.com/pydata/pandas/blob/master/pandas/core/series.py#L1140

df = self._constructor_expanddim({name: self})

https://github.com/pydata/pandas/blob/master/pandas/core/series.py#L265

@property
def _constructor_expanddim(self):
    from pandas.core.frame import DataFrame
    return DataFrame
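
Whether that constructor call actually copies the underlying values can be checked empirically; a small sketch (the outcome may vary across pandas versions and internal block layouts):

import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3], name='b')
f = s.to_frame('v')

# True means the DataFrame column still references the Series' buffer,
# i.e. the values were not copied; this can differ by pandas version.
print(np.shares_memory(s.values, f['v'].values))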

Performance impact (tested on a 600K-row DF):

In [66]: %timeit df.groupby('a').b.sum()
10 loops, best of 3: 46.2 ms per loop

In [67]: %timeit df.groupby('a').b.sum().to_frame('v')
10 loops, best of 3: 49.7 ms per loop

In [68]: 49.7 / 46.2
Out[68]: 1.0757575757575757
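
The answer doesn't show how the benchmark frame was built, so a setup along these lines is an assumption (column dtypes and number of groups chosen arbitrarily) that should give comparable timings:

import numpy as np
import pandas as pd

# Hypothetical 600K-row frame for the timings above.
np.random.seed(0)
df = pd.DataFrame({'a': np.random.randint(0, 1000, size=600000),
                   'b': np.random.randn(600000)})

# In IPython:
# %timeit df.groupby('a').b.sum()
# %timeit df.groupby('a').b.sum().to_frame('v')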

Performance impact (tested on a 6M-row DF):

In [69]: df = pd.concat([df] * 10, ignore_index=True)

In [70]: df.shape
Out[70]: (6000000, 2)

In [71]: %timeit df.groupby('a').b.sum()
1 loop, best of 3: 474 ms per loop

In [72]: %timeit df.groupby('a').b.sum().to_frame('v')
1 loop, best of 3: 464 ms per loop

Performance impact (tested on a 60M-row DF):

In [73]: df = pd.concat([df] * 10, ignore_index=True)

In [74]: df.shape
Out[74]: (60000000, 2)

In [75]: %timeit df.groupby('a').b.sum()
1 loop, best of 3: 4.28 s per loop

In [76]: %timeit df.groupby('a').b.sum().to_frame('v')
1 loop, best of 3: 4.3 s per loop

In [77]: 4.3 / 4.28
Out[77]: 1.0046728971962615

Conclusion: the performance impact doesn't seem to be that significant...