How do I find the number of similar addresses for each customer?

Asked: 2018-06-22 20:29:16

Tags: python numpy scikit-learn nlp nltk

I have a dataset with two columns: customer id and addresses.

id      addresses
1111    asturias 32, benito juarez, CDMX
1111    JOSE MARIA VELASCO, CDMX
1111    asturias 32 DEPT 401, INSURGENTES, CDMX
1111    deportes
1111    asturias 32, benito juarez, MIXCOAC, CDMX
1111    cd. de los deportes
1111    deportes, wisconsin
2222    TORRE REFORMA LATINO, CDMX
2222    PERISUR 2890
2222    WE WORK, CDMX
2222    WEWORK, TORRE REFORMA LATINO, CDMX
2222    PERISUR: 2690, COYOCAN
2222    TORRE REFORMA LATINO

I am interested in finding the number of distinct addresses for each customer. For example, for customer id 1111 there are 3 distinct addresses:

  1. [asturias 32, benito juarez, CDMX, asturias 32 DEPT 401, INSURGENTES, CDMX, asturias 32, benito juarez, MIXCOAC, CDMX]

  2. [JOSE MARIA VELASCO, CDMX]

  3. [deportes, cd. de los deportes, deportes, wisconsin]

The Python code I wrote can only show the similarity between two consecutive rows, row i and row i+1 (a score of 0 means completely different, while 1 means identical):

id      addresses                                  score
1111    asturias 32, benito juarez, CDMX             0
1111    JOSE MARIA VELASCO, CDMX                     0
1111    asturias 32 DEPT 401, INSURGENTES, CDMX      0
1111    deportes                                     0
1111    asturias 32, benito juarez, MIXCOAC, CDMX    0
1111    cd. de los deportes                          0.21
1111    deportes, wisconsin                          0
2222    TORRE REFORMA LATINO, CDMX                   0
2222    PERISUR 2890                                 0
2222    WE WORK, CDMX                                0.69
2222    WEWORK, TORRE REFORMA LATINO, CDMX           0
2222    PERISUR: 2690, COYOCAN                       0
2222    TORRE REFORMA LATINO

If the score is > 0.20, I consider the two rows to be the same address. Here is my code:

import nltk
import pandas as pd
import string
from sklearn.feature_extraction.text import TfidfVectorizer

data = pd.read_csv('address.csv')
nltk.download('punkt')
stemmer = nltk.stem.porter.PorterStemmer()
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

def normalize(text):
    '''Remove punctuation, lowercase, tokenize, and stem.'''
    return stem_tokens(
        nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')

def cosine_sim(text1, text2):
    tfidf = vectorizer.fit_transform([text1, text2])
    # The off-diagonal entry of tfidf * tfidf.T is the cosine similarity,
    # since TfidfVectorizer L2-normalizes each row by default
    return ((tfidf * tfidf.T).A)[0, 1]

for i in range(len(data) - 1):  # compares each row only with the next one
    print(cosine_sim(data['addresses'][i], data['addresses'][i + 1]))
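
As a quick sanity check (illustrative only; the exact scores depend on the tokenizer and stemmer), cosine_sim is high only when the two strings share tokens:

print(cosine_sim('asturias 32, benito juarez, CDMX',
                 'asturias 32, benito juarez, MIXCOAC, CDMX'))  # high: most tokens shared
print(cosine_sim('asturias 32, benito juarez, CDMX',
                 'PERISUR 2890'))                               # 0.0: no tokens shared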

However, the code above cannot compare every possible pair of rows for a given customer id. I would like output like the following:

id     unique address
1111    3
2222    3
3333    2

1 Answer:

Answer 0 (score: 1)

You can use combinations from itertools for this purpose. See the comparison code below.
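
For instance (a small illustration, not part of the original answer), combinations yields every unordered pair of addresses exactly once:

import itertools

addrs = ['PERISUR 2890', 'WE WORK, CDMX', 'TORRE REFORMA LATINO']
for a, b in itertools.combinations(addrs, 2):
    print(a, '|', b)
# PERISUR 2890 | WE WORK, CDMX
# PERISUR 2890 | TORRE REFORMA LATINO
# WE WORK, CDMX | TORRE REFORMA LATINO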

Note that I used a semicolon-separated CSV file.
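
Since the addresses themselves contain commas, a comma separator would split them across columns; a semicolon-separated file (reconstructed here from the sample data) looks like:

id;addresses
1111;asturias 32, benito juarez, CDMX
1111;JOSE MARIA VELASCO, CDMX
2222;TORRE REFORMA LATINO, CDMX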

Also, if needed, you can use spaCy's similarity function to find the similarity between two phrases. Here, I have used the same function you provided.
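
A minimal sketch of that alternative, assuming a spaCy model with word vectors (e.g. en_core_web_md) is installed:

import spacy

nlp = spacy.load('en_core_web_md')  # 'sm' models ship without word vectors
doc1 = nlp('WEWORK, TORRE REFORMA LATINO, CDMX')
doc2 = nlp('TORRE REFORMA LATINO, CDMX')
print(doc1.similarity(doc2))  # cosine similarity of the averaged word vectors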

import nltk
import pandas as pd
import itertools
import string
from sklearn.feature_extraction.text import TfidfVectorizer


def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

def normalize(text):
    '''Remove punctuation, lowercase, tokenize, and stem.'''
    return stem_tokens(
        nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

def cosine_sim(text1, text2):
    tfidf = vectorizer.fit_transform([text1, text2])
    return ((tfidf * tfidf.T).A)[0, 1]

def group_addresses(addresses):
    '''merge the lists if they have an element in common'''
    out = []
    while len(addresses) > 0:
        first, *rest = addresses  # take the first address list (Python 3 unpacking)
        first = set(first)
        lf = -1
        while len(first) > lf:  # keep absorbing until the group stops growing
            lf = len(first)

            rest2 = []
            for r in rest:
                if len(first.intersection(set(r))) > 0:
                    first |= set(r)  # shares an element: merge into the group
                else:
                    rest2.append(r)  # no overlap yet; retry on the next pass
            rest = rest2

        out.append(first)
        addresses = rest
    return out
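
# For illustration (not in the original answer): lists sharing any element
# end up merged into a single group, e.g.
#   group_addresses([['a'], ['b'], ['a', 'b'], ['c']])  ->  [{'a', 'b'}, {'c'}]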


df = pd.read_csv("address.csv", sep=";")
stemmer = nltk.stem.porter.PorterStemmer()
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')

sim_df = pd.DataFrame(columns=['id', 'unique address'])

for customer in set(df['id']):
    customer_addresses = df.loc[df['id'] == customer]['addresses']  # this customer's addresses
    all_entries = [[adr] for adr in customer_addresses]  # start with one singleton list per address
    # Keep every pair of addresses whose similarity exceeds 0.2
    sim_pairs = [[text1, text2]
                 for text1, text2 in itertools.combinations(customer_addresses, 2)
                 if cosine_sim(text1, text2) > 0.2]
    all_entries.extend(sim_pairs)
    groups = group_addresses(all_entries)
    sim_df.loc[len(sim_df)] = [customer, len(groups)]  # collect the per-customer count
    print(customer, len(groups))

The output looks like:

2222 2
1111 3

The groups formed are:

2222
['WE WORK, CDMX', 'WEWORK, TORRE REFORMA LATINO, CDMX', 'TORRE REFORMA LATINO, CDMX', 'TORRE REFORMA LATINO']
['PERISUR 2890', 'PERISUR: 2690, COYOCAN']

1111
['asturias 32 DEPT 401, INSURGENTES, CDMX', 'asturias 32, benito juarez, MIXCOAC, CDMX', 'asturias 32, benito juarez, CDMX']
['JOSE MARIA VELASCO, CDMX']
['deportes, wisconsin', 'cd. de los deportes', 'deportes']