I have a data frame like this:
source target weight
1 2 5
2 1 5
1 2 5
1 2 7
3 1 6
1 1 6
1 3 6
My goal is to drop the duplicate rows, but the order of the source and target columns does not matter: a row counts as a duplicate even when source and target are swapped, and it should be removed. In this case, the expected result would be:
source target weight
1 2 5
1 2 7
3 1 6
1 1 6
Is there a way to do this without a loop?
Answer 0 (score: 3)
Use frozenset and duplicated:
df[~df[['source', 'target']].apply(frozenset, axis=1).duplicated()]
source target weight
0 1 2 5
4 3 1 6
5 1 1 6
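As a side note (not part of this answer), the same order-insensitive dedup can be done without frozenset by sorting each pair with NumPy; a minimal sketch, assuming df is the question's frame:

import numpy as np
import pandas as pd

# Sort each (source, target) pair so (2, 1) and (1, 2) produce the same key.
pairs = pd.DataFrame(np.sort(df[['source', 'target']].to_numpy(), axis=1), index=df.index)
df[~pairs.duplicated()]

Either way, the trick is to build a hashable, order-insensitive key per row and then reuse duplicated.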
If you want to treat source/target as unordered but also take weight into account:
df[~df[['weight']].assign(A=df[['source', 'target']].apply(frozenset, axis=1)).duplicated()]
source target weight
0 1 2 5
3 1 2 7
4 3 1 6
5 1 1 6
However, to be explicit, with more readable code:
# Create series where values are frozensets and therefore hashable.
# With hashable things, we can determine duplicity.
# Note that I also set the index and name to set up for a convenient `join`
s = pd.Series(list(map(frozenset, zip(df.source, df.target))), df.index, name='mixed')
# Use `drop` to focus on just those columns leaving whatever else is there.
# This is more general and accommodates more than just a `weight` column.
mask = df.drop(['source', 'target'], axis=1).join(s).duplicated()
df[~mask]
source target weight
0 1 2 5
3 1 2 7
4 3 1 6
5 1 1 6
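If this pattern comes up repeatedly, it can be wrapped in a small helper. This is my own sketch (the name drop_unordered_duplicates is made up), following the same drop/join/duplicated idea as above:

import pandas as pd

def drop_unordered_duplicates(df, a='source', b='target'):
    # Hashable, order-insensitive key for each (a, b) pair.
    key = pd.Series(list(map(frozenset, zip(df[a], df[b]))), index=df.index, name='pair_key')
    # Judge duplicity on the pair key plus every remaining column.
    mask = df.drop([a, b], axis=1).join(key).duplicated()
    return df[~mask]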
Answer 1 (score: 0)
This should be easy. First, rebuild the example data:
import pandas as pd

data = [[1,2,5],
        [2,1,5],
        [1,2,5],
        [1,2,7],
        [3,1,6],
        [1,1,6],
        [1,3,6],
        ]
df = pd.DataFrame(data, columns=['source','target','weight'])
You can use drop_duplicates to remove the exact duplicates first:
df = df.drop_duplicates()
print(df)
which results in:
source target weight
0 1 2 5
1 2 1 5
3 1 2 7
4 3 1 6
5 1 1 6
6 1 3 6
Reversed pairs like (2, 1, 5) and (1, 3, 6) survive because drop_duplicates compares columns positionally. To handle the unordered source/target pairs, sort each pair so both orderings look the same:
def pair(row):
    # Sort the pair so (2, 1) and (1, 2) both become (1, 2).
    sorted_pair = sorted([row['source'], row['target']])
    row['source'] = sorted_pair[0]
    row['target'] = sorted_pair[1]
    return row

df = df.apply(pair, axis=1)
Then you can use df.drop_duplicates() once more:
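df = df.drop_duplicates()
print(df)

which now gives the expected result: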
source target weight
0 1 2 5
3 1 2 7
4 1 3 6
5 1 1 6
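One last aside (my own sketch, not from either answer): the row-wise apply can be slow on large frames, and the pair normalization vectorizes cleanly with NumPy:

import numpy as np

# Overwrite both columns with each row's sorted pair in one shot.
df[['source', 'target']] = np.sort(df[['source', 'target']].to_numpy(), axis=1)
df = df.drop_duplicates()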