Pandas duplicate index after dropping duplicates

Asked: 2020-08-01 09:32:05

Tags: python-3.x pandas duplicates pivot

I'm getting: "ValueError: Index contains duplicate entries, cannot reshape"

The data I'm working with is quite large; I can't provide sample data, and I can't replicate the error with a smaller dataset. I tried generating duplicates with dummy data to mimic my original frame, but for some mysterious reason the code works on the dummy data and not on my real data. Here's what I know about the shape:


df.shape

>> (6820, 26) 

df.duplicated()

>> 0       False
>> 1       False
>> 2       False
>>        ...  
>> 6818    False
>> 6819    False
>> Length: 6820, dtype: bool

Now I want to find out which rows are duplicated:

df[df.duplicated(keep=False)]

>> 0 rows × 26 columns

Just to be sure, I drop all duplicates, keeping only the first occurrence:

df = df.drop_duplicates(keep='first')

This is where I hit the ValueError:

df2 = df.melt('Release')\
        .assign(variable = lambda x: x.variable.map({'Created Date':1,'Finished Date':-1}))\
        .pivot('value','Release','variable').fillna(0)\
        .rename(columns = lambda c: f'{c} netmov' )


---> 33         .pivot('value','Release','variable').fillna(0)\
ValueError: Index contains duplicate entries, cannot reshape

On closer investigation, it seems it isn't duplicated rows but a duplicated index. I tried resetting the index with df.reset_index(), but it raises the same ValueError.
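
One way to locate the offending rows (a sketch, assuming the same Release / Created Date / Finished Date columns as the code above) is to check for duplicated (value, Release) pairs in the melted frame; .pivot needs these pairs to be unique even when no full row is duplicated:

melted = df.melt('Release')
# rows whose (value, Release) pair occurs more than once are what break .pivot
print(melted[melted.duplicated(subset=['value', 'Release'], keep=False)])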

EDIT:

I can provide dummy data that should replicate the frame I'm working with (just minus a few columns that aren't needed):

df = pd.DataFrame({'name': ["Peter", "Anna", "Anna", "Peter", "Simon", "Johan", "Nils", "Oskar", "Peter"]
                  , 'Deposits': ["2019-03-07", "2019-03-08", "2019-03-12", "2019-03-12", "2019-03-14", "2019-03-07", "2019-03-08", "2016-03-07", "2019-03-07"]
                  , 'Withdrawals': ["2019-03-11", "2019-03-19", "2019-05-22", "2019-10-31", "2019-04-05", "2019-03-11", "NaN", "2017-03-06", "2019-03-11"]})

df.duplicated()

0    False
1    False
2    False
.....
7    False
8     True
dtype: bool

df = df.drop_duplicates(keep='first')
df2 = df.melt('name')\
        .assign(variable = lambda x: x.variable.map({'Deposits':1,'Withdrawals':-1}))\
        .pivot('value','name','variable').fillna(0)\
        .rename(columns = lambda c: f'{c} netmov' )

df2 = pd.concat([df2,df2.cumsum().rename(columns = lambda c: c.split()[0] + ' balance')], axis = 1)\
        .sort_index(axis=1)


print(df2.head())

name        Anna balance  Anna netmov  Johan balance  Johan netmov  \
value                                                                
2016-03-07           0.0          0.0            0.0           0.0   
2017-03-06           0.0          0.0            0.0           0.0   
2019-03-07           0.0          0.0            1.0           1.0   
2019-03-08           1.0          1.0            1.0           0.0   
2019-03-11           1.0          0.0            0.0          -1.0

This runs smoothly even when there are duplicates in the DataFrame.

Preferably the duplicates shouldn't be dropped at all, since 'Anna' could have 4 deposits and 4 withdrawals on a single day and I want to count all of them.
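
Incidentally, the error can be reproduced on the dummy frame by adding one more movement on a date 'Anna' already has a deposit for (the extra row below is made up for illustration):

# hypothetical extra row: a second deposit for Anna on 2019-03-08
extra = pd.DataFrame({'name': ['Anna'], 'Deposits': ['2019-03-08'], 'Withdrawals': ['2019-03-20']})
df = pd.concat([df, extra], ignore_index=True)
# melting now yields two ('2019-03-08', 'Anna') pairs, so the .pivot call
# above raises "ValueError: Index contains duplicate entries, cannot reshape"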

The DataFrame I'm working with:


df = df.drop_duplicates().reset_index(drop=True)
df = df.drop(['id'], axis=1)
df

Output:

        name    Deposits     Withdrawals
0       Anna    2020-07-31   NaN
1       Peter   2020-07-30   NaN
2       Simon   2020-07-30   NaN
3       Simon   2020-07-29   NaN
4       Simon   2020-07-29   NaN
... ... ... ...
6154    Peter   2014-01-22  2014-02-03
6155    Peter   2014-01-22  2014-01-29
6156    Peter   2014-01-22  2014-01-24
6157    Peter   2014-01-21  2014-01-29
6158    Peter   2014-01-15  2014-02-03
6159 rows × 3 columns

UPDATE: Shout-out to the community for helping me out with this.

This solved the problem:

df.Deposits = pd.to_datetime(df.Deposits)
df.Withdrawals = pd.to_datetime(df.Withdrawals)

df2 = (
    df.melt('name') 
    .assign(variable = lambda x: x.variable.map({'Deposits':1,'Withdrawals':-1}))
    .dropna(subset=['value']) # you need this for cases like Nils's Withdrawal
    )
df2 = df2.groupby(['value', 'name']).sum().unstack(fill_value=0).droplevel(0, axis=1)


df2 = (
    pd.concat([df2, df2.cumsum()], keys=['netmov', 'balance'], axis=1)
    # notice how concat has the functionality you want for naming columns
    # and is a better idea to have netmov/balance in a separate level
    # in case you want to groupby or .loc later on
    .reorder_levels([1, 0], axis=1).sort_index(axis=1)
    )

Though I've stumbled onto the next problem, unrelated to this one: when converting this DataFrame to JSON, for some reason it converts the dates to another format.

data = df2.to_json()
print(data)

{
    "Peter":
    {
        "1389744000000": 0,
        "1390262400000": 0,
        "1390348800000": 0,
        "1390521600000": 0,
    .....
    .....
    }
}
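
Those keys are the dates serialized as epoch milliseconds, which is the pandas default for to_json. If ISO strings are preferable, to_json accepts a date_format parameter:

data = df2.to_json(date_format='iso')
# index keys are now ISO 8601 timestamps instead of epoch milliseconds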

There's always something else, heh. This helped a lot though; I can almost touch the goal line.

1 Answer:

Answer 0 (score: 2):

The problem seems to appear when a name has more than one movement on exactly the same Deposits/Withdrawals date (hence the duplicates). The DataFrame .pivot method can't handle a duplicated index; it isn't designed for that. For your analysis purposes, .pivot_table does the trick: the main difference is that it can apply an aggregation function (in this case sum) to handle duplicated indexes. https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot_table.html
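
As a minimal sketch, that pivot_table variant would look something like this (same melt/map steps as in the question, with sum aggregating the movements that share a date):

df2 = (
    df.melt('name')
      .assign(variable=lambda x: x.variable.map({'Deposits': 1, 'Withdrawals': -1}))
      .dropna(subset=['value'])  # drop missing dates such as Nils's Withdrawal
      .pivot_table(index='value', columns='name', values='variable',
                   aggfunc='sum', fill_value=0)
)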

Personally, I lean towards .groupby for this kind of problem, since it not only offers grouping by any combination of the columns in the df, but can also take exogenous Series, computations, the index or index levels (from the same frame or another), masks, and so on.

So my code would be:

df.Deposits = pd.to_datetime(df.Deposits)
df.Withdrawals = pd.to_datetime(df.Withdrawals) # this parsing probably happens in read_csv
df2 = (
    df.melt('name') 
    .assign(variable = lambda x: x.variable.map({'Deposits':1, 'Withdrawals':-1}))
    # use lambda if you must
    # replace on 'variable' after creating df2 would also work
    # and is probably faster for larger dfs
    .dropna(subset=['value']) # you need this for cases like Nils's Withdrawal
    )
df2 = df2.groupby(['value', 'name']).sum().unstack(fill_value=0).droplevel(0, axis=1)
df2 = (
    pd.concat([df2, df2.cumsum()], keys=['netmov', 'balance'], axis=1)
    # notice how concat has the functionality you want for naming columns
    # and is a better idea to have netmov/balance in a separate level
    # in case you want to groupby or .loc later on
    .reorder_levels([1, 0], axis=1).sort_index(axis=1)
    )

Output:

name          Anna          Johan           Nils  ...  Oskar   Peter          Simon
           balance netmov balance netmov balance  ... netmov balance netmov balance netmov
value                                             ...
2016-03-07       0      0       0      0       0  ...      1       0      0       0      0
2017-03-06       0      0       0      0       0  ...     -1       0      0       0      0
2019-03-07       0      0       1      1       0  ...      0       2      2       0      0
2019-03-08       1      1       1      0       1  ...      0       2      0       0      0
2019-03-11       1      0       0     -1       1  ...      0       0     -2       0      0
2019-03-12       2      1       0      0       1  ...      0       1      1       0      0
2019-03-14       2      0       0      0       1  ...      0       1      0       1      1
2019-03-19       1     -1       0      0       1  ...      0       1      0       1      0
2019-04-05       1      0       0      0       1  ...      0       1      0       0     -1
2019-05-22       0     -1       0      0       1  ...      0       1      0       0      0
2019-10-31       0      0       0      0       1  ...      0       0     -1       0      0