Pandas: drop only consecutive duplicate rows, ignoring specific columns

Time: 2020-07-20 09:36:35

Tags: python pandas loops dataframe duplicates

I have the DataFrame below:

df = pd.DataFrame({
    'ID': ['James', 'James', 'James', 'James',
           'Max', 'Max', 'Max', 'Max', 'Max',
           'Park', 'Park','Park', 'Park',
           'Tom', 'Tom', 'Tom', 'Tom'],
    'From_num': [578, 420, 420, 'Started', 298, 78, 36, 298, 'Started', 28, 28, 311, 'Started', 60, 520, 99, 'Started'],
    'To_num': [96, 578, 578, 420, 36, 298, 78, 36, 298, 112, 112, 28, 311, 150, 60, 520, 99],
    'Date': ['2020-05-12', '2020-02-02', '2020-02-01', '2019-06-18',
             '2019-08-26', '2019-06-20', '2019-01-30', '2018-10-23',
             '2018-08-29', '2020-05-21', '2020-05-20', '2019-11-22',
             '2019-04-12', '2019-10-16', '2019-08-26', '2018-12-11', '2018-10-09']})

It looks like this:

       ID From_num  To_num        Date
0   James      578      96  2020-05-12
1   James      420     578  2020-02-02
2   James      420     578  2020-02-01 # Drop this duplicated row (ignore date)
3   James  Started     420  2019-06-18
4     Max      298      36  2019-08-26
5     Max       78     298  2019-06-20
6     Max       36      78  2019-01-30
7     Max      298      36  2018-10-23
8     Max  Started     298  2018-08-29
9    Park       28     112  2020-05-21
10   Park       28     112  2020-05-20 # Drop this duplicate row (ignore date)
11   Park      311      28  2019-11-22
12   Park  Started     311  2019-04-12
13    Tom       60     150  2019-10-16
14    Tom      520      60  2019-08-26
15    Tom       99     520  2018-12-11
16    Tom  Started      99  2018-10-09

Within each 'ID' (name) there are some consecutive duplicate values (ignoring the Date column). For example, rows 1 and 2 for James both have From_num 420, and the same goes for rows 9 and 10. I want to drop the second of each such pair and keep the first row. I wrote a loop with conditions, but it is very verbose and slow, and I suspect there is an easier way, so please help if you have any ideas. Many thanks. The expected result looks like this:

       ID  From_num  To_num    Date
0   James      578      96  2020-05-12
1   James      420     578  2020-02-02
2   James  Started     420  2019-06-18
3     Max      298      36  2019-08-26
4     Max       78     298  2019-06-20
5     Max       36      78  2019-01-30
6     Max      298      36  2018-10-23
7     Max  Started     298  2018-08-29
8    Park       28     112  2020-05-21
9    Park      311      28  2019-11-22
10   Park  Started     311  2019-04-12
11    Tom       60     150  2019-10-16
12    Tom      520      60  2019-08-26
13    Tom       99     520  2018-12-11
14    Tom  Started      99  2018-10-09

4 answers:

Answer 0 (score: 1)

A bit late to the party, but does this do what you want? It drops consecutive duplicates while ignoring 'Date'.

t = df[['ID', 'From_num', 'To_num']]
# keep a row only if it differs from the previous row in at least one of these columns
df[(t.ne(t.shift())).any(axis=1)]

       ID From_num  To_num        Date
0   James      578      96  2020-05-12
1   James      420     578  2020-02-02
3   James  Started     420  2019-06-18
4     Max      298      36  2019-08-26
5     Max       78     298  2019-06-20
6     Max       36      78  2019-01-30
7     Max      298      36  2018-10-23
8     Max  Started     298  2018-08-29
9    Park       28     112  2020-05-21
11   Park      311      28  2019-11-22
12   Park  Started     311  2019-04-12
13    Tom       60     150  2019-10-16
14    Tom      520      60  2019-08-26
15    Tom       99     520  2018-12-11
16    Tom  Started      99  2018-10-09

This removes the rows with index values 2 and 10.
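
If you also want the clean 0-to-14 index shown in the question's expected output, here is a small follow-up sketch (the reset_index call is my own addition, reusing the df defined in the question):

t = df[['ID', 'From_num', 'To_num']]
# drop consecutive duplicates, then renumber the remaining rows from 0
result = df[t.ne(t.shift()).any(axis=1)].reset_index(drop=True)
print(result)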

Answer 1 (score: 0)

It seems to me that DataFrame.drop_duplicates does exactly this; by default it keeps the first occurrence and drops the rest:

unique_df = df.drop_duplicates(['ID', 'From_num', 'To_num'])

https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html
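
Note, though, that for the sample data a plain drop_duplicates also merges the two non-consecutive Max rows (index 4 and 7, both 298 -> 36), which the expected output keeps. A quick check, assuming the df defined in the question:

unique_df = df.drop_duplicates(['ID', 'From_num', 'To_num'])
# index 7 (Max, 298 -> 36) is dropped even though it is not consecutive with index 4
unique_df[unique_df['ID'] == 'Max']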

Edit

As mentioned in the question, only consecutive rows should be handled. For that I would suggest marking them first and then running drop_duplicates on a column subset that includes the marker (I am not sure this is the optimal solution):

import numpy as np

df['original_index'] = np.nan
indices = df.index
for i in range(1, len(indices)):
    # if the current row equals the previous one (ignoring Date)
    if (df.loc[indices[i - 1], 'ID'] == df.loc[indices[i], 'ID']
            and df.loc[indices[i - 1], 'From_num'] == df.loc[indices[i], 'From_num']
            and df.loc[indices[i - 1], 'To_num'] == df.loc[indices[i], 'To_num']):
        # reuse the original index if it has already been set on the previous row
        if pd.notna(df.loc[indices[i - 1], 'original_index']):
            df.loc[indices[i], 'original_index'] = df.loc[indices[i - 1], 'original_index']
        else:
            # otherwise set it to the previous row's index for both rows
            df.loc[indices[i - 1], 'original_index'] = indices[i - 1]
            df.loc[indices[i], 'original_index'] = indices[i - 1]

# give every unmarked row its own index, otherwise drop_duplicates would treat
# the NaN markers of non-consecutive duplicates as equal and drop them as well
df['original_index'] = df['original_index'].fillna(df.index.to_series())

Now we include the 'original_index' column in the drop_duplicates call:

unique_df = df.drop_duplicates(['ID', 'From_num', 'To_num', 'original_index'])
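
If you do not want the helper column in the final result, you can drop it again afterwards; a small addition of my own, not part of the original answer:

unique_df = (
    df.drop_duplicates(['ID', 'From_num', 'To_num', 'original_index'])
      .drop(columns='original_index')   # remove the marker column again
      .reset_index(drop=True)
)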

Answer 2 (score: 0)

Compare each row with the one above it, then invert the boolean mask to get the result:

cond1 = df.ID.eq(df.ID.shift())              # same ID as the row above
cond2 = df.From_num.eq(df.From_num.shift())  # same From_num as the row above
cond = cond1 & cond2

df.loc[~cond].reset_index(drop=True)
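
If the same From_num could ever be followed by a different To_num, a slightly stricter mask that also compares To_num might be safer (my own extension, not part of the answer, reusing cond1 and cond2 from above):

cond3 = df.To_num.eq(df.To_num.shift())      # same To_num as the row above

df.loc[~(cond1 & cond2 & cond3)].reset_index(drop=True)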

Alternative: a longer route:

(
    df.assign(
        # size of each (ID, From_num) group
        temp=df.groupby(["ID", "From_num"]).From_num.transform("size"),
        # True where both From_num and the group size repeat the previous row
        check=lambda x: (x.From_num.eq(x.From_num.shift())) &
                        (x.temp.eq(x.temp.shift())),
    )
    .query("check == 0")             # keep rows where check is False
    .drop(["temp", "check"], axis=1)
)

Answer 3 (score: -1)

df.groupby(['ID', 'From_num', 'To_num']).first().reset_index()

Edit - this removes duplicates even if they are not consecutive, e.g. rows 4 and 7 in the original df.

Update

cols = ['ID', 'From_num', 'To_num']
df.loc[(df[cols].shift() != df[cols]).any(axis=1)].shape
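
To see the rows themselves rather than just their count, drop the .shape call; a short sketch (the reset_index call is my own addition to match the numbering in the expected output):

cols = ['ID', 'From_num', 'To_num']
# keep a row if any of the key columns differs from the row above
result = df.loc[(df[cols].shift() != df[cols]).any(axis=1)].reset_index(drop=True)
print(result)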