KMeans clustering with multi-dimensional features

Date: 2019-07-09 20:02:16

Tags: python scikit-learn nlp k-means word2vec

Is it possible to train a KMeans model on a matrix of multi-dimensional features?

I am clustering with sklearn's KMeans class, extracting word vectors with Word2Vec, and preprocessing the text with TreeTagger.

from gensim.models import Word2Vec
from sklearn.cluster import KMeans

lemmatized_words = [["be", "information", "contract", "residential"], ["can", "send", "package", "recovery"]]

w2v_model = Word2Vec.load(wiki_path_model)

bag_of_words = [w2v_model.wv(phrase) for phrase in lemmatized_words]

#
#
# bag_of_words = [array([[-0.08796783,  0.08373307,  0.04610106, ...,  0.41964772,
#        -0.1733183 ,  0.09438939],
#       [ 0.11526374,  0.09092105, -0.2086806 , ...,  0.5205145 ,
#        -0.11455593, -0.05190944],
#       [-0.05140354,  0.09938619,  0.07485678, ...,  0.73840886,
#        -0.17298238,  0.09994634],
#       ...,
#       [-0.01144416, -0.17129216, -0.04012141, ...,  0.05281362,
#        -0.23109615,  0.02297313],
#       [-0.08355679,  0.24799444,  0.04348441, ...,  0.27940673,
#        -0.14400786, -0.09187686],
#       [ 0.11022831,  0.11035886,  0.19900796, ...,  0.12891224,
#        -0.09379898,  0.10538024]],dtype=float32)
#       array([[ 1.73330009e-01,  1.26429915e-01, -3.47578406e-01, ...,
#         8.09064806e-02, -3.02738965e-01, -1.61911864e-02],
#       [ 2.47227158e-02, -6.48087710e-02, -1.97364464e-01, ...,
#         1.35158226e-01,  1.72204189e-02, -1.14456110e-01],
#       [ 8.07424933e-02,  2.69261692e-02, -4.22120057e-02, ...,
#         1.01349883e-01, -1.94084793e-01, -2.64464412e-04],
#       ...,
#       [ 1.36009008e-01,  1.50609210e-01, -2.59797573e-01, ...,
#         1.84113771e-01, -6.85161874e-02, -1.04138054e-01],
#       [ 4.83367145e-02,  1.17820159e-01, -2.43335906e-02, ...,
#         1.33836940e-01, -1.55749675e-02, -1.18981823e-01],
#       [-6.68482706e-02,  4.57039356e-01, -2.20365867e-01, ...,
#         2.95841128e-01, -1.55933857e-01,  7.39804050e-03]], dtype=float32)
#       ]
#
#

model = KMeans(algorithm='auto', max_iter=300, n_clusters=2)

model.fit(bag_of_words)

I want to train the KMeans model so I can store it and use it for prediction, but I get the following error:

ValueError: setting an array element with a sequence.
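This error comes from NumPy refusing to pack per-phrase matrices with different numbers of rows into one rectangular array. A minimal reproduction of the same failure, using zero matrices as stand-ins for the per-word Word2Vec vectors (the shapes are illustrative, not taken from the question):

```python
import numpy as np

# Two "phrases" of different lengths: each is a (num_words, dim) matrix,
# like the per-word vectors Word2Vec returns for a phrase.
phrase_a = np.zeros((4, 300), dtype=np.float32)
phrase_b = np.zeros((3, 300), dtype=np.float32)

# KMeans.fit tries to build a single rectangular float array from its
# input; with ragged rows this conversion raises a ValueError.
try:
    np.array([phrase_a, phrase_b], dtype=np.float32)
except ValueError as e:
    print("ValueError:", e)
```

Depending on the NumPy version, the message reads "setting an array element with a sequence" or mentions an inhomogeneous shape, but the cause is the same ragged input.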

1 Answer:

Answer 0 (score: 0)

Your problem is in w2v_model.wv(phrase). As the name suggests, a Word2Vec model applies at the word level. To get a phrase embedding, you need to average (or otherwise aggregate) the embeddings of all the individual words in that phrase.

So you need to replace

bag_of_words = [w2v_model.wv(phrase) for phrase in lemmatized_words]

with

import numpy as np
bag_of_words = [np.mean([w2v_model.wv[word] for word in phrase], axis=0) for phrase in lemmatized_words]

The following snippet works fine for me. It uses KeyedVectors instead of the deprecated Word2Vec, but the rest is the same.

from gensim.models import KeyedVectors
from sklearn.cluster import KMeans
import numpy as np
lemmatized_words = [["be", "information", "contract", "residential"], ["can", "send", "package", "recovery"]]
w2v_model = KeyedVectors.load_word2vec_format(wiki_path_model, binary=True)  
bag_of_words = np.array([np.mean([w2v_model[word] for word in phrase if word in w2v_model], axis=0) for phrase in lemmatized_words])
print(bag_of_words.shape) # it should give (2, 300) for a 300-dimensional w2v
model = KMeans(max_iter=300, n_clusters=2)
model.fit(bag_of_words)

Of course, averaging (or any other aggregation) discards some information about the individual words, and that information might matter for clustering. But without aggregation you cannot obtain comparable phrase embeddings, because different phrases can have different lengths.
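The whole pipeline described above can be sketched end to end with random vectors standing in for real Word2Vec embeddings (the dimension, phrase lengths, and data here are purely illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim = 50  # stand-in for the 300-dimensional Word2Vec space

# Three phrases of different lengths; each row plays the role of a word vector.
phrases = [
    rng.normal(size=(4, dim)),
    rng.normal(size=(7, dim)),
    rng.normal(size=(3, dim)),
]

# Mean pooling: every phrase collapses to a single fixed-size vector,
# so the result is a rectangular (n_phrases, dim) matrix.
features = np.array([m.mean(axis=0) for m in phrases])
print(features.shape)  # (3, 50)

# Now KMeans accepts the input without complaint.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print(model.labels_.shape)  # (3,)
```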

If clustering the averaged embeddings does not work well, I suggest looking into pretrained sentence embeddings (e.g. Google's Universal Sentence Encoder, or perhaps embeddings from BERT).