Related to this similar Pandas question: filtering grouped df in pandas
Action: eliminating groups based on an expression applied to a column other than the groupby column.
Problem: filtering is not implemented for grouped dataframes.
Tried: groupby and apply to drop certain groups, which raises an error, presumably because the apply function should always return something?
In [16]:
def filter_empty(df):
    if not df.label.values.all(4):
        return df

df_nonempty = df_norm.groupby('hash').apply(filter_empty, meta=meta)
In [17]:
len(df_nonempty.hash.unique())
...
<ipython-input-16-6da6d9b6c069> in filter_empty()
1 def filter_empty(df):
----> 2 if not df.label.values.all(4):
3 return df
4
5 df_nonempty = df_norm.groupby('hash').apply(filter_empty, meta=meta)
/opt/conda/lib/python3.5/site-packages/numpy/core/_methods.py in _all()
39
40 def _all(a, axis=None, dtype=None, out=None, keepdims=False):
---> 41 return umr_all(a, axis, dtype, out, keepdims)
42
43 def _count_reduce_items(arr, axis):
ValueError: 'axis' entry is out of bounds
Question
Is there another way to achieve the Dask equivalent of pandas' grouped.filter(lambda x: len(x) > 1)? Or is groupby-apply simply implemented incorrectly?
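For reference, a minimal self-contained sketch of the pandas idiom the question refers to (the data here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'A': list('aacaaa'),
                   'B': [4, 5, 4, 5, 5, 4]})

# pandas can drop whole groups in one call: keep only groups with > 1 row
kept = df.groupby('A').filter(lambda x: len(x) > 1)
print(kept['A'].unique())  # group 'c' (a single row) is gone
```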
Example
import numpy as np
import pandas as pd
import dask.dataframe as dd
In [3]:
df = pd.DataFrame({'A':list('aacaaa'),
                   'B':[4,5,4,5,5,4],
                   'C':[7,8,9,4,2,3],
                   'D':[1,3,5,7,1,0],
                   'E':[5,3,6,9,2,4],
                   'F':list('aaabbc')})
df = dd.from_pandas(df, npartitions=1)
In [8]:
df.A.unique().compute()
Out[8]:
0 a
1 c
Name: A, dtype: object
In [6]:
def filter_4(df):
    if not df.B.values.all(4):
        return df

df_notalla = df.groupby('A').apply(filter_4, meta=df)
In [10]:
df_notalla.A.unique().compute()
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-894a491faa57> in <module>()
----> 1 df_notalla.A.unique().compute()
...
<ipython-input-6-ef10326ae42a> in filter_4(df)
1 def filter_4(df):
----> 2 if not df.B.values.all(4):
3 return df
4
5 df_notalla = df.groupby('A').apply(filter_4, meta=df)
/opt/conda/lib/python3.5/site-packages/numpy/core/_methods.py in _all(a, axis, dtype, out, keepdims)
39
40 def _all(a, axis=None, dtype=None, out=None, keepdims=False):
---> 41 return umr_all(a, axis, dtype, out, keepdims)
42
43 def _count_reduce_items(arr, axis):
ValueError: 'axis' entry is out of bounds
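Incidentally, the ValueError above comes from NumPy, not from Dask's groupby: ndarray.all() takes an axis as its first positional argument, so .all(4) asks for axis 4 of a 1-D array. A sketch of the distinction (illustrative values):

```python
import numpy as np

values = np.array([4, 5, 4])

# values.all(4) raises ValueError: 'axis' entry is out of bounds,
# because 4 is interpreted as the axis argument, not a value to test.

# To ask "are all entries equal to 4?", compare first, then reduce:
print((values == 4).all())  # False here, since one entry is 5
```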
Answer 0 (score: 2)
I think you can first use groupby + size, then map the sizes back onto the column with map (this works like transform, which is not implemented in dask), and finally filter with boolean indexing:

df = pd.DataFrame({'A':list('aacaaa'),
                   'B':[4,5,4,5,5,4],
                   'C':[7,8,9,4,2,3],
                   'D':[1,3,5,7,1,0],
                   'E':[5,3,6,9,2,4],
                   'F':list('aaabbc')})
print (df)
   A  B  C  D  E  F
0  a  4  7  1  5  a
1  a  5  8  3  3  a
2  c  4  9  5  6  a
3  a  5  4  7  9  b
4  a  5  2  1  2  b
5  a  4  3  0  4  c

a = df.groupby('F')['A'].size()
print (a)
F
a    3
b    2
c    1
Name: A, dtype: int64

s = df['F'].map(a)
print (s)
0    3
1    3
2    3
3    2
4    2
5    1
Name: F, dtype: int64

df = df[s > 1]
print (df)
   A  B  C  D  E  F
0  a  4  7  1  5  a
1  a  5  8  3  3  a
2  c  4  9  5  6  a
3  a  5  4  7  9  b
4  a  5  2  1  2  b
EDIT:

I think groupby is not necessary here:

df_notall4 = df[df.C != 4].drop_duplicates(subset=['A','D'])['D'].compute()

But if you really need it:

def filter_4(x):
    return x[x.C != 4]

df_notall4 = df.groupby('A').apply(filter_4, meta=df).D.unique().compute()
print (df_notall4)
0    1
1    3
2    0
3    5
Name: D, dtype: int64
Answer 1 (score: 1)
Thanks to @jezrael, I reviewed my implementation and came up with the following solution (see the example I provided).
df_notall4 = []
for d in list(df[df.C != 4].D.unique().compute()):
    df_notall4.append(df.groupby('D').get_group(d))

df_notall4 = dd.concat(df_notall4, interleave_partitions=True)
The result is
In [8]:
df_notall4.D.unique().compute()
Out[8]:
0 1
1 3
2 5
3 0
Name: D, dtype: object