Remove pandas DataFrame rows based on a similarity measure

Asked: 2019-08-01 00:36:37

Tags: python pandas dataframe rows

I want to remove duplicate rows from a DataFrame.

I know the drop_duplicates() method works for removing rows with identical values in a subset of columns. However, I want to remove rows that are not identical but similar. For example, I have the following two rows:

       Title         |  Area  |  Price
Apartment at Boston  |  100   |  150000
Apt at Boston        |  105   |  149000

I want to be able to remove these two rows based on some similarity measure, for example when Title, Area, and Price differ by less than 5%. Say I could remove rows with similarity > 0.95. This would be especially useful for large data sets, instead of checking row by row manually. How can I achieve this?
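For reference, drop_duplicates only removes rows that match exactly, so it leaves both of the rows above in place:

import pandas as pd

df = pd.DataFrame({'Title': ['Apartment at Boston', 'Apt at Boston'],
                   'Area': [100, 105],
                   'Price': [150000, 149000]})

# No row is dropped: the two rows differ, even if only slightly.
print(df.drop_duplicates(subset=['Title', 'Area', 'Price']))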

2 answers:

Answer 0 (score: 1)

Here is a function using difflib. I got the similar function from here. You may also want to look at some of the answers on that page to decide on the best similarity metric for your use case.

import pandas as pd
import numpy as np

df1 = pd.DataFrame({'Title': ['Apartment at Boston', 'Apt at Boston'],
                    'Area': [100, 105],
                    'Price': [150000, 149000]})

def string_ratio(df, col, ratio):
    from difflib import SequenceMatcher

    def similar(a, b):
        # Similarity between two strings on a 0-1 scale.
        return SequenceMatcher(None, a, b).ratio()

    ratios = []
    for i, x in enumerate(df[col]):
        # Score row i's string against every string in the column.
        a = np.array([similar(x, row) for row in df[col]])
        # Indices of rows scoring below the threshold.
        a = np.where(a < ratio)[0]
        # True when every other row is at least `ratio`-similar to row i.
        ratios.append(len(a[a != i]) == 0)
    return pd.Series(ratios)

def numeric_ratio(df, col, ratio):
    ratios = []
    for i, x in enumerate(df[col]):
        # Ratio of the smaller to the larger value; 1.0 means equal.
        a = np.array([min(x, row) / max(x, row) for row in df[col]])
        a = np.where(a < ratio)[0]
        # True when every other row is within tolerance of row i.
        ratios.append(len(a[a != i]) == 0)
    return pd.Series(ratios)

# Drop rows whose Title, Area, and Price are all similar to every other row's.
mask = ~(string_ratio(df1, 'Title', .95) &
         numeric_ratio(df1, 'Area', .95) & numeric_ratio(df1, 'Price', .95))

df1[mask]

It should be able to weed out most of the similar data, though you may want to tweak the string_ratio function if it does not fit your case.
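As a quick check on the 0.95 threshold, SequenceMatcher scores the two example titles from the question around 0.81, so with a 0.95 cutoff both rows are kept; lowering the threshold to about 0.8 would flag the pair, at the cost of more false positives:

from difflib import SequenceMatcher

# 2 * 13 matching characters / (19 + 13) total characters = 0.8125
SequenceMatcher(None, 'Apartment at Boston', 'Apt at Boston').ratio()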

Answer 1 (score: 1)

See if this meets your needs.

import pandas as pd

Title = ['Apartment at Boston', 'Apt at Boston', 'Apt at Chicago',
         'Apt at Seattle', 'Apt at Seattle', 'Apt at Chicago']
Area = [100, 105, 100, 102, 101, 101]
Price = [150000, 149000, 150200, 150300, 150000, 150000]
data = dict(Title=Title, Area=Area, Price=Price)
df = pd.DataFrame(data, columns=data.keys())

The created df looks like this:

                 Title  Area   Price
0  Apartment at Boston   100  150000
1        Apt at Boston   105  149000
2       Apt at Chicago   100  150200
3       Apt at Seattle   102  150300
4       Apt at Seattle   101  150000
5       Apt at Chicago   101  150000

Now we run the following code:

from fuzzywuzzy import fuzz

def fuzzy_compare(a, b):
    # partial_ratio scores the best matching substring, 0-100.
    return fuzz.partial_ratio(a, b)

tl = df["Title"].tolist()

def do_the_thing(i):
    # Compare row i against every later row and drop the later row
    # when the titles fuzzy-match above 80 and Area and Price are
    # each within roughly 5% of one another.
    itered = i + 1
    while itered < len(tl):
        val = fuzzy_compare(tl[i], tl[itered])
        if val > 80:
            area_ratio = df.loc[i, 'Area'] / df.loc[itered, 'Area']
            price_ratio = df.loc[i, 'Price'] / df.loc[itered, 'Price']
            if 0.94 < area_ratio < 1.05 and 0.94 < price_ratio < 1.05:
                df.drop(itered, inplace=True)
        itered = itered + 1

i = 0
while i < len(tl) - 1:
    try:
        do_the_thing(i)
    except KeyError:
        # Row i (or a row it is compared against) was already dropped,
        # so its index label no longer exists; skip ahead.
        pass
    i = i + 1

The output df is below. The duplicate Boston and Seattle listings are removed whenever the fuzzy match is greater than 80 and the Area and Price values are within 5% of each other.

                 Title  Area   Price
0  Apartment at Boston   100  150000
2       Apt at Chicago   100  150200
3       Apt at Seattle   102  150300
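For intuition on the 80 cutoff, here is a quick check of the fuzzy score on the two Boston titles (this assumes fuzzywuzzy is installed, e.g. pip install fuzzywuzzy):

from fuzzywuzzy import fuzz

# partial_ratio scores the two Boston titles above the 80 cutoff,
# which is why row 1 is dropped from the output above.
print(fuzz.partial_ratio('Apartment at Boston', 'Apt at Boston'))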
