I have a KMeans clustering script that organises some documents based on their text content. The documents fall into one of 3 clusters, and that seems to work quite well, but I would like to know how strongly each document belongs to its cluster.
For example, Document A might be a 90% match for Cluster 1 while Document B is only a 45% match for Cluster 1.
That way I could set some kind of threshold and say I only want documents that are an 80% match or better.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

dict_of_docs = {'Document A':'some text content',...'Document Z':'some more text content'}
# Vectorizing the data, my data is held in a Dict, so I just want the values.
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(dict_of_docs.values())
X = X.toarray()
# 3 clusters, as I know there are 3; otherwise use the Elbow method
# (a sketch of that is included after this script).
# Then add the vectorized data to the vocabulary.
NUMBER_OF_CLUSTERS = 3
km = KMeans(
    n_clusters=NUMBER_OF_CLUSTERS,
    init='k-means++',
    max_iter=500)
km.fit(X)
# First: for every document we get its corresponding cluster
clusters = km.predict(X)
# We train the PCA on the dense version of the tf-idf.
pca = PCA(n_components=2)
two_dim = pca.fit_transform(X)
scatter_x = two_dim[:, 0]  # first principal component
scatter_y = two_dim[:, 1]  # second principal component
plt.style.use('ggplot')
fig, ax = plt.subplots()
fig.set_size_inches(20,10)
# color map for NUMBER_OF_CLUSTERS we have
cmap = {0: 'green', 1: 'blue', 2: 'red'}
# group by clusters and scatter plot every cluster
# with a colour and a label
for group in np.unique(clusters):
    ix = np.where(clusters == group)
    ax.scatter(scatter_x[ix], scatter_y[ix], c=cmap[group], label=group)
ax.legend()
plt.xlabel("PCA 0")
plt.ylabel("PCA 1")
plt.show()
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
# Print out the top terms for each cluster
terms = vectorizer.get_feature_names_out()  # use get_feature_names() on older scikit-learn
for i in range(NUMBER_OF_CLUSTERS):
    print("Cluster %d:" % i, end='')
    for ind in order_centroids[i, :10]:
        print(' %s' % terms[ind], end='')
    print()
for doc in dict_of_docs:
    text = dict_of_docs[doc]
    Y = vectorizer.transform([text])
    prediction = km.predict(Y)
    print(prediction, doc)
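As an aside on the Elbow-method comment in the script above, here is a minimal sketch of what that could look like, reusing X, KMeans and plt from the script; the candidate range 2-9 and max_iter=500 are arbitrary choices for illustration.

# Fit KMeans for a range of candidate k values and record the inertia
# (within-cluster sum of squared distances); the "elbow" where the curve
# stops dropping sharply is a reasonable choice for the number of clusters.
candidate_ks = range(2, 10)
inertias = [KMeans(n_clusters=k, init='k-means++', max_iter=500).fit(X).inertia_
            for k in candidate_ks]

plt.plot(list(candidate_ks), inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia')
plt.show()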
Answer 0 (score: 1)
I don't think it is possible to do exactly what you want, because k-means is not really a probabilistic model, and its scikit-learn implementation (which I assume is the one you are using) simply does not provide the right interface.
One option I would suggest is to use the KMeans.score method. It does not give a probabilistic output, but it does give a score that is larger the closer a point is to its nearest centroid. You could threshold on that, e.g. say "Document A has a score of -0.01 in cluster 1, so I keep it" or "Document B has a score of -1000 in cluster 2, so I ignore it."
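A minimal sketch of that idea, reusing the vectorizer and km objects from your script; SCORE_THRESHOLD is a made-up placeholder that you would tune by inspecting the scores of your own documents.

SCORE_THRESHOLD = -0.5  # hypothetical cut-off, tune on your own data

for doc, text in dict_of_docs.items():
    Y = vectorizer.transform([text])
    cluster = km.predict(Y)[0]
    # score() returns the negative squared distance to the nearest centroid,
    # so values closer to 0 indicate a tighter fit to the assigned cluster.
    score = km.score(Y)
    decision = 'keep' if score >= SCORE_THRESHOLD else 'ignore'
    print(doc, 'cluster', cluster, 'score', round(score, 4), decision)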
Another option is to use a GaussianMixture model instead. A Gaussian mixture model is very similar to k-means, and GaussianMixture.predict_proba provides exactly the probabilities you are after.
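A sketch of that alternative, reusing the dense tf-idf matrix X, NUMBER_OF_CLUSTERS and dict_of_docs from your script, plus the 80% threshold from your question; covariance_type='diag' and random_state=0 are arbitrary choices, and with very high-dimensional tf-idf features you may want to reduce dimensionality (e.g. with PCA or TruncatedSVD) before fitting.

from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=NUMBER_OF_CLUSTERS,
                      covariance_type='diag',  # 'full' can be unstable on high-dimensional tf-idf data
                      random_state=0)
gmm.fit(X)  # X is already dense (see X.toarray() above), which GaussianMixture requires

probs = gmm.predict_proba(X)   # shape (n_documents, n_clusters)
PROB_THRESHOLD = 0.80          # "80% or better" from the question

for doc, p in zip(dict_of_docs, probs):
    best = p.argmax()
    decision = 'keep' if p[best] >= PROB_THRESHOLD else 'below threshold'
    print(doc, 'cluster', best, 'probability', round(p[best], 3), decision)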