Filter a Dask dataframe based on the size of a multi-column groupby

Time: 2019-02-14 02:52:06

Tags: python pandas dask

Goal: group a dask dataframe by multiple columns and filter out groups with fewer than 3 rows.

Based on this question: Filtering grouped df in Dask

I am able to calculate the size of each groupby object, but I don't know how to map the result of a multi-column groupby back onto my dataframe. I tried many variations of the following, to no avail:

a = input_df.groupby(["FeatureID", "region"])["Target"].size()
s = input_df[["FeatureID", "region"]].map(a)

It works fine for a single-column groupby.
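
For reference, a minimal sketch of the single-column pattern referred to above (assuming a pandas-style .map that accepts a Series of group sizes):

# single-column case: per-group sizes keyed by FeatureID only
a = input_df.groupby("FeatureID")["Target"].size()
# with a single key column, .map can look up each row's group size directly
s = input_df["FeatureID"].map(a)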

Solution

Thanks to @jezrael, I was able to come up with the following solution:

# count distinct Target values per (FeatureID, region) group and join back on those columns
a = input_df.groupby(["FeatureID", "region"])["Target"].nunique().to_frame("feature_div")
input_df = input_df.join(a, on=["FeatureID", "region"])

# filter out features below diversity threshold
diversified = input_df[input_df.feature_div >= diversity_threshold]
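
Note that nunique counts distinct Target values per group. To match the literal goal of dropping groups with fewer than 3 rows, the same join pattern works with size instead; a sketch (the group_size column name and the threshold of 3 are illustrative):

# hypothetical size-based variant: count rows (not distinct Targets) per group
sizes = input_df.groupby(["FeatureID", "region"])["Target"].size().to_frame("group_size")
input_df = input_df.join(sizes, on=["FeatureID", "region"])

# keep only groups with at least 3 rows, per the stated goal
filtered = input_df[input_df.group_size >= 3]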

1 answer:

Answer 0 (score: 1)

You need join together with to_frame:

# per-group size joined back onto each row as column 'New'
a = input_df.groupby(["FeatureID", "region"])["Target"].size().to_frame('New')
input_df = input_df.join(a, on=["FeatureID", "region"])

Example

import pandas as pd
from dask import dataframe as dd 

input_df = pd.DataFrame({
         'FeatureID':[4,5,4,5,5,4],
         'region':list('aaabbb'),
         'Target':[7,8,9,4,2,3],
})

print (input_df)
   FeatureID region  Target
0          4      a       7
1          5      a       8
2          4      a       9
3          5      b       4
4          5      b       2
5          4      b       3

sd = dd.from_pandas(input_df, npartitions=3)
print (sd)
              FeatureID  region Target
npartitions=3                         
0                 int64  object  int64
2                   ...     ...    ...
4                   ...     ...    ...
5                   ...     ...    ...
Dask Name: from_pandas, 3 tasks

a = sd.groupby(["FeatureID", "region"])["Target"].size().to_frame('New')
out = sd.join(a, on=["FeatureID", "region"]).compute()
print (out)
   FeatureID region  Target  New
0          4      a       7    2
1          5      a       8    1
2          4      a       9    2
3          5      b       4    2
4          5      b       2    2
5          4      b       3    1
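
To finish the original task, the joined New column can then be used to drop small groups. With this toy data a threshold of 3 would remove every row, so the sketch below keeps groups of at least 2:

# keep only rows whose (FeatureID, region) group has at least 2 rows
kept = out[out["New"] >= 2]
print(kept)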