How do I combine text-vector features with other features before feeding them into sklearn?

Asked: 2019-03-28 12:41:05

Tags: python scikit-learn tfidfvectorizer

I am trying to combine two types of features before clustering.

My features are text, represented as a sparse matrix, plus another array holding the remaining features of my data points.

I tried combining the two types of features into a single array and passing it to the algorithm as input:

db = DBSCAN(eps=1, min_samples=3, metric=get_distance).fit(array(combined_list))

I also built the custom distance metric that I will be using:

def get_distance(vec1,vec2):
    text_distance = cosine_similarity(vec1[0] ,vec2[0])
    other_distance = vec1[1]-vec2[1]

    return (text_distance+other_distance)/2

But I get an error when I try to pass the input array. The combined array is constructed as follows:

combined_list = []
for i in range(len(hashes_list)):
    combined_list.append((hashes_list[i],text_list[i]))

combined_list = array(combined_list)

Full error traceback:

db = DBSCAN(eps=1, min_samples=3, metric=get_distance ).fit(array(combined_list))

Traceback (most recent call last):
  File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
    exec(exp, global_vars, local_vars)
  File "<input>", line 1, in <module>
  File "/Users/tal/src/campaign_detection/Data_Extractor/venv/lib/python3.7/site-packages/sklearn/cluster/dbscan_.py", line 319, in fit
    X = check_array(X, accept_sparse='csr')
  File "/Users/tal/src/campaign_detection/Data_Extractor/venv/lib/python3.7/site-packages/sklearn/utils/validation.py", line 527, in check_array
    array = np.asarray(array, dtype=dtype, order=order)
  File "/Users/tal/src/campaign_detection/Data_Extractor/venv/lib/python3.7/site-packages/numpy/core/numeric.py", line 538, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.

Is this the right way to combine text vectors with other features?

1 Answer:

Answer 0 (score: 0)

I have a couple of suggestions for your approach.

  1. The input to DBSCAN must be a 2D array (or sparse matrix), not a 1D array of tuples, so you have to stack your input data into a single matrix (see the short sketch after this list for why the tuple construction fails).

From the documentation:

  X : array or sparse (CSR) matrix of shape (n_samples, n_features), or array of shape (n_samples, n_samples)

  2. get_distance() must return a single value, not an array. I also suggest using a proper distance measure for the non-text features; I use euclidean distance in the example below.
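
To see why the original construction blows up, here is a minimal sketch (with made-up toy data, not your real features): packing a dense feature row and a sparse text row into one tuple gives NumPy an object array (or, on recent NumPy versions, an immediate error), and sklearn's check_array cannot cast that into a numeric 2D matrix, which is exactly the "setting an array element with a sequence" ValueError you see.

import numpy as np
from scipy.sparse import csr_matrix

other_features = np.array([[1.0, 2.0], [3.0, 4.0]])   # stand-in for hashes_list
text_features = csr_matrix(np.eye(2))                 # stand-in for the TF-IDF rows

try:
    combined = np.array([(other_features[i], text_features[i]) for i in range(2)])
    np.asarray(combined, dtype=np.float64)             # this is what check_array does internally
except ValueError as exc:
    print(exc)   # "setting an array element with a sequence..."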

Example:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> corpus = [
...     'This is the first document.',
...     'This document is the second document.',
...     'And this is the third one.',
...     'Is this the first document?',
... ]
>>> vectorizer = TfidfVectorizer()
>>> text_list = vectorizer.fit_transform(corpus)


import numpy as np
hashes_list = np.array([[12,12,12],
               [12,13,11],
               [12,1,16],
               [4,8,11]])

from scipy.sparse import hstack
combined_list = hstack((hashes_list,text_list))
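# combined_list now has shape (4, 12): the 3 hash columns come first,
# followed by the 9 TF-IDF columns of this toy vocabulary.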

from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics.pairwise import euclidean_distances

from sklearn.cluster import DBSCAN

n1 = len(vectorizer.get_feature_names())   # number of TF-IDF columns in combined_list

def get_distance(vec1, vec2):
    # the last n1 entries of each row are the TF-IDF columns,
    # the leading entries are the other (hash) features
    text_distance = 1 - cosine_similarity([vec1[-n1:]], [vec2[-n1:]])[0][0]   # similarity -> distance
    other_distance = euclidean_distances([vec1[:-n1]], [vec2[:-n1]])[0][0]
    return (text_distance + other_distance) / 2   # a single scalar, as DBSCAN expects

db = DBSCAN(eps=1, min_samples=3, metric=get_distance).fit(combined_list.toarray())
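
Once fitted, the cluster assignments can be inspected through the estimator's attributes. Keep in mind that with only four toy samples and eps=1 the labels themselves are not meaningful; eps and min_samples will need tuning on your real data.

print(db.labels_)                 # cluster label per sample; -1 marks noise
print(db.core_sample_indices_)    # indices of the core samples DBSCAN found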