I can remove duplicates now when there is no key in front of the nested dict. An example of a list of dicts that I use this function on is:
[{'asndb_prefix': '164.39.xxx.0/17',
  'cidr': '164.39.xxx.0/17',
  'cymru_asn': 'XXX',
  'cymru_country': 'GB',
  'cymru_owner': 'XXX , GB',
  'cymru_prefix': '164.39.xxx.0/17',
  'ips': ['164.39.xxx.xxx'],
  'network_id': '164.39.xxx.xxx/24'},
 {'asndb_prefix': '54.192.xxx.xxx/16',
  'cidr': '54.192.0.0/16',
  'cymru_asn': '16509',
  'cymru_country': 'US',
  'cymru_owner': 'AMAZON-02 - Amazon.com, Inc., US',
  'cymru_prefix': '54.192.144.0/22',
  'ips': ['54.192.xxx.xxx', '54.192.xxx.xxx'],
  'network_id': '54.192.xxx.xxx/24'}]
def remove_dict_duplicates(list_of_dicts):
    """
    Remove duplicates in dict.
    """
    list_of_dicts = [dict(t) for t in set([tuple(d.items()) for d in list_of_dicts])]
    # remove the {} before and after - not sure why these are placed as
    # the first and last element
    return list_of_dicts[1:-1]
However, I want to be able to remove duplicates based on the key and all of the values associated with it in that dict. So if the same key appears with different values inside, I do not want to remove it, but if it is a full duplicate, it should be removed:
[{'50.16.xxx.0/24': {'asndb_prefix': '50.16.0.0/16',
                     'cidr': '50.16.0.0/14',
                     'cymru_asn': 'xxxx',
                     'cymru_country': 'US',
                     'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
                     'cymru_prefix': '50.16.0.0/16',
                     'ip': '50.16.221.xxx',
                     'network_id': '50.16.xxx.0/24',
                     'pyasn_asn': xxxx,
                     'whois_asn': 'xxxx'}},
 # This would be removed
 {'50.16.xxx.0/24': {'asndb_prefix': '50.16.0.0/16',
                     'cidr': '50.16.0.0/14',
                     'cymru_asn': 'xxxxx',
                     'cymru_country': 'US',
                     'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
                     'cymru_prefix': '50.16.0.0/16',
                     'ip': '50.16.221.xxx',
                     'network_id': '50.16.xxx.0/24',
                     'pyasn_asn': xxxx,
                     'whois_asn': 'xxxx'}},
 # This would NOT be removed
 {'50.16.xxx.0/24': {'asndb_prefix': '50.999.0.0/16',
                     'cidr': '50.999.0.0/14',
                     'cymru_asn': 'xxxx',
                     'cymru_country': 'US',
                     'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
                     'cymru_prefix': '50.16.0.0/16',
                     'ip': '50.16.221.xxx',
                     'network_id': '50.16.xxx.0/24',
                     'pyasn_asn': xxxx,
                     'whois_asn': 'xxxx'}}]
How would I go about doing this? Thanks.
Answer 0 (score: 2)
To remove duplicates from a list of sequences:
list_of_unique_dicts = []
for dict_ in list_of_dicts:
    if dict_ not in list_of_unique_dicts:
        list_of_unique_dicts.append(dict_)
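Because dict equality compares nested values recursively, this loop handles the {key: {inner dict}} structure from the question directly and keeps the first occurrence in order. A small self-contained check, with toy data standing in for the question's dicts:

nested = [{'a': {'x': 1}},
          {'a': {'x': 1}},   # full duplicate -> dropped
          {'a': {'x': 2}}]   # same key, different value -> kept
list_of_unique_dicts = []
for dict_ in nested:
    if dict_ not in list_of_unique_dicts:
        list_of_unique_dicts.append(dict_)
print(list_of_unique_dicts)   # [{'a': {'x': 1}}, {'a': {'x': 2}}]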
Answer 1 (score: 1)
If the order of the result does not matter, you can use a set to remove the duplicates by converting the dicts into frozensets:
def remove_dict_duplicates(list_of_dicts):
    """
    Remove duplicates.
    """
    packed = set((k, frozenset(v.items())) for elem in list_of_dicts
                 for k, v in elem.items())
    return [{k: dict(v)} for k, v in packed]
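Applied to the question's second example (with the xxxx placeholders filled in with real values), this returns only the two distinct entries, since equal frozensets hash the same and collapse in the set. Note that the order of the result is arbitrary because it comes out of a set:

unique = remove_dict_duplicates(list_of_dicts)
len(unique)   # 2: the full duplicate is gone, the dict with changed values stays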
This assumes that all values of the innermost dicts are hashable.
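It would therefore not work on the first example from the question as-is: the 'ips' values are lists, so frozenset(v.items()) raises TypeError: unhashable type: 'list'. A possible workaround, my own sketch and not part of the original answer, is to convert lists to tuples before freezing; unpacking the result then yields tuples instead of the original lists:

def freeze_items(d):
    # Lists become tuples so that frozenset() can hash them.
    return frozenset((k, tuple(v) if isinstance(v, list) else v)
                     for k, v in d.items())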
Giving up the order offers a potential speedup for large lists. For example, create a list with 100,000 elements:
inner = {'asndb_prefix': '50.999.0.0/16',
         'cidr': '50.999.0.0/14',
         'cymru_asn': '14618',
         'cymru_country': 'US',
         'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
         'cymru_prefix': '50.16.0.0/16',
         'ip': '50.16.221.xxx',
         'network_id': '50.16.xxx.0/24',
         'pyasn_asn': 14618,
         'whois_asn': '14618'}
large_list = list_of_dicts + [{x: inner} for x in range(int(1e5))]
Checking the result list for duplicates over and over again takes a long time:
def remove_dupes(list_of_dicts):
    """Source: answer from wim
    """
    list_of_unique_dicts = []
    for dict_ in list_of_dicts:
        if dict_ not in list_of_unique_dicts:
            list_of_unique_dicts.append(dict_)
    return list_of_unique_dicts
%timeit remove_dupes(large_list)
1 loop, best of 3: 2min 55s per loop
My version, which works with sets, is much faster:
%timeit remove_dict_duplicates(large_list)
1 loop, best of 3: 590 ms per loop
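If you need both the speed and the original order, a compromise, again my own sketch under the same hashability assumption, is to keep the list order and use a set of frozen representations only for the O(1) membership tests:

def remove_dupes_ordered(list_of_dicts):
    """Linear-time dedup that keeps the first occurrence of each dict."""
    seen = set()
    result = []
    for elem in list_of_dicts:
        # Freeze each {key: {inner dict}} entry into a hashable form.
        key = frozenset((k, frozenset(v.items())) for k, v in elem.items())
        if key not in seen:
            seen.add(key)
            result.append(elem)
    return result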