Pandas - Duplicate rows based on a condition

Time: 2017-03-27 18:24:35

Tags: python pandas group-by duplicates

I'm trying to create duplicate rows when a row meets a condition. In the table below, I first create a cumulative count within each groupby group, and then a second calculation that takes the MAX of that count per group.

df['PathID'] = df.groupby('Date Completed').cumcount() + 1
df['MaxPathID'] = df.groupby('Date Completed')['PathID'].transform(max)

Date Completed    PathID    MaxPathID
1/31/17           1         3
1/31/17           2         3
1/31/17           3         3
2/1/17            1         1
2/2/17            1         2
2/2/17            2         2

In this case, I want to duplicate only the 2/1/17 record, since that date has just one occurrence (i.e. MaxPathID == 1).

Desired output:

Date Completed    PathID    MaxPathID
1/31/17           1         3
1/31/17           2         3
1/31/17           3         3
2/1/17            1         1
2/1/17            1         1
2/2/17            1         2
2/2/17            2         2

Thanks in advance!
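For reference, a minimal sketch that reproduces the sample frame above (the column name and string-typed dates are assumptions read off the table):

import pandas as pd

# sample data as plain strings (an assumption); three dates, one of them unique
df = pd.DataFrame({'Date Completed': ['1/31/17', '1/31/17', '1/31/17',
                                      '2/1/17', '2/2/17', '2/2/17']})
df['PathID'] = df.groupby('Date Completed').cumcount() + 1
df['MaxPathID'] = df.groupby('Date Completed')['PathID'].transform(max)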

3 Answers:

Answer 0 (score: 2)

I think you need to select the rows whose Date Completed value is not duplicated (i.e. occurs only once), then concat those rows back to the original:

import pandas as pd

df1 = df.loc[~df['Date Completed'].duplicated(keep=False), ['Date Completed']]
print (df1)
  Date Completed
3         2/1/17

df = pd.concat([df,df1], ignore_index=True).sort_values('Date Completed')
df['PathID'] = df.groupby('Date Completed').cumcount() + 1
df['MaxPathID'] = df.groupby('Date Completed')['PathID'].transform(max)
print (df)
  Date Completed  PathID  MaxPathID
0        1/31/17       1          3
1        1/31/17       2          3
2        1/31/17       3          3
3         2/1/17       1          2
6         2/1/17       2          2
4         2/2/17       1          2
5         2/2/17       2          2
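A note on the selection step: duplicated(keep=False) marks every row whose Date Completed occurs more than once, so negating it keeps exactly the dates that appear a single time. A small, hedged check of that mask on the original (pre-concat) frame:

mask = ~df['Date Completed'].duplicated(keep=False)
print (mask.tolist())
# [False, False, False, True, False, False]  -> only 2/1/17 is unique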

EDIT:

print (df)
  Date Completed  a  b
0        1/31/17  4  5
1        1/31/17  3  5
2        1/31/17  6  3
3         2/1/17  7  9
4         2/2/17  2  0
5         2/2/17  6  7

df1 = df[~df['Date Completed'].duplicated(keep=False)]
#alternative - boolean indexing by numpy array
#df1 = df[~df['Date Completed'].duplicated(keep=False).values]
print (df1)
  Date Completed  a  b
3         2/1/17  7  9

df = pd.concat([df,df1], ignore_index=True).sort_values('Date Completed')
print (df)
  Date Completed  a  b
0        1/31/17  4  5
1        1/31/17  3  5
2        1/31/17  6  3
3         2/1/17  7  9
6         2/1/17  7  9
4         2/2/17  2  0
5         2/2/17  6  7

Answer 1 (score: 1)

A creative use of the repeat method with numpy + duplicated:

import numpy as np

dc = df['Date Completed']
rg = np.arange(len(dc)).repeat((~dc.duplicated(keep=False).values) + 1)
df.iloc[rg]

  Date Completed  PathID  MaxPathID
0        1/31/17       1          3
1        1/31/17       2          3
2        1/31/17       3          3
3         2/1/17       1          1
3         2/1/17       1          1
4         2/2/17       1          2
5         2/2/17       2          2
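The repeat counts come from the boolean mask: ~dc.duplicated(keep=False) is True only for the single 2/1/17 row, and adding 1 turns that into a repeat count of 2 for that row and 1 for every other row. A hedged illustration of the intermediate arrays:

import numpy as np

counts = (~df['Date Completed'].duplicated(keep=False).values) + 1
print (counts)                              # [1 1 1 2 1 1]
print (np.arange(len(df)).repeat(counts))   # [0 1 2 3 3 4 5] -> row 3 taken twice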

Answer 2 (score: 0)

I know this may be a slightly different problem, but it matches the question description, so people will land here from Google. I haven't looked into optimizing the code below or anything like that; I'm sure there are better ways, but sometimes you just have to accept imperfection ;) So I'm posting it here in case someone faces a similar situation and wants to get it done quickly. It seems to run reasonably fast.

Suppose we have a dataframe (df) like this:

[image: example dataframe in which field3 holds multiple space-separated entries]

We want to transform it so that, wherever field3 contains multiple entries, each entry is expanded onto its own row, like this:

[image: the same dataframe with field3 expanded to one entry per row]

Here is one way to do it:

import pandas as pd
import numpy as np
from datetime import date,datetime

index = []
double_values = []


### get index and get list of values on which to expand per indexed row
for i,r in df.iterrows():
    index.append(i)
    ### below: transform your column with multiple entries into a list based on a delimiter
    double_values.append(str(r[2]).split(' '))

serieses = []

print('tot row to process', len(index))
count = 0
for i,dvs in zip(index,double_values):
    count+= 1
    if count % 1000 == 0:
        print('elem left', len(index)- count, datetime.now().strftime("%d/%m/%Y %H:%M:%S"))
    if len(dvs)>1:
        for dv in dvs:
            series = df.iloc[i].copy()  # copy the row so the original frame is not modified
            series.loc['field3'] = dv
            serieses.append(list(series))

#create dataframe out of expanded rows now appended to serieses list, creating a list of lists
df2 = pd.DataFrame.from_records(serieses,columns=df.columns)

### drop original rows with double entries, which have been expanded and appended already
indexes_to_drop = []
for i,dvs in zip(index,double_values):
    if len(dvs)>1:
        indexes_to_drop.append(i)

df.drop(df.index[indexes_to_drop],inplace=True)
len(df)


df = df.append(df2)
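On newer pandas (0.25+), the same expansion can be written much more concisely with Series.str.split plus DataFrame.explode, which also avoids DataFrame.append (deprecated since pandas 1.4 and removed in 2.0). A hedged sketch, assuming field3 holds space-separated values as in the example above:

import pandas as pd

# split the multi-valued column into lists, then give each list element its own row
df['field3'] = df['field3'].astype(str).str.split(' ')
df = df.explode('field3').reset_index(drop=True)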