Python clustering "purity" metric

Date: 2015-12-02 16:14:17

Tags: python scikit-learn cluster-analysis

I am using the Gaussian Mixture Model (GMM) from sklearn.mixture to cluster a dataset.

I can use the score() function to compute the log probability under the model.

However, I am looking for a metric called "purity", which is defined in this article.

How can I implement it in Python? My current implementation looks like this:

import numpy as np
from sklearn.mixture import GMM

# X is a 1000 x 2 array (1000 samples of 2 coordinates).
# It is actually a 2 dimensional PCA projection of data
# extracted from the MNIST dataset, but this random array
# is equivalent as far as the code is concerned.
X = np.random.rand(1000, 2)

clusterer = GMM(3, 'diag')
clusterer.fit(X)
cluster_labels = clusterer.predict(X)

# Now I can count the labels for each cluster..
count0 = list(cluster_labels).count(0)
count1 = list(cluster_labels).count(1)
count2 = list(cluster_labels).count(2)

But I am unable to loop through each cluster and compute the confusion matrix (per this question).
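
Conceptually, what I am after is a per-cluster count of the true labels, something like this sketch (assuming the ground-truth digit labels were available in a hypothetical y_true array, which is not shown here):

# Hypothetical: y_true would hold the ground-truth digit labels of the 1000 samples.
y_true = np.random.randint(0, 3, 1000)   # placeholder for the real labels

# Count, for each cluster, how often each true label occurs.
for k in range(3):
    members = y_true[cluster_labels == k]
    labels, counts = np.unique(members, return_counts=True)
    print(k, dict(zip(labels, counts)))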

4 Answers:

Answer 0 (score: 8)

David's answer works, but here is another way to do it.

import numpy as np
from sklearn import metrics

def purity_score(y_true, y_pred):
    # compute contingency matrix (also called confusion matrix)
    contingency_matrix = metrics.cluster.contingency_matrix(y_true, y_pred)
    # return purity
    return np.sum(np.amax(contingency_matrix, axis=0)) / np.sum(contingency_matrix) 

Also, if you need to compute the inverse purity, just replace axis=0 with axis=1.
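
A quick check with hypothetical toy labels (not from the question) illustrates both variants:

# Hypothetical toy labels, just to illustrate usage.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 0, 1, 1])

print(purity_score(y_true, y_pred))   # 4/6 ≈ 0.67

# Inverse purity: the same formula with axis=1 instead of axis=0.
contingency_matrix = metrics.cluster.contingency_matrix(y_true, y_pred)
print(np.sum(np.amax(contingency_matrix, axis=1)) / np.sum(contingency_matrix))  # 6/6 = 1.0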

Answer 1 (score: 4)

sklearn does not implement a cluster purity metric. You have two options:

  1. Implement the measurement yourself using sklearn data structures. This and this have some Python source for measuring purity, but either your data or the function bodies need to be adapted so they are compatible with each other (a minimal sketch follows this list).

  2. Use the (much less mature) PML library, which does implement cluster purity.
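
As a rough illustration of option 1, purity can be computed directly from NumPy arrays. A minimal sketch, assuming integer-encoded ground-truth labels y_true and cluster assignments y_pred:

import numpy as np

def purity(y_true, y_pred):
    # For each cluster, take the size of its largest ground-truth class,
    # then divide the sum of those sizes by the total number of samples.
    total = 0
    for cluster in np.unique(y_pred):
        members = y_true[y_pred == cluster]
        _, counts = np.unique(members, return_counts=True)
        total += counts.max()
    return total / y_true.shape[0]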

Answer 2 (score: 4)

A very late contribution.

You can try to implement it like this, much as in this gist:
import numpy as np
from sklearn.metrics import accuracy_score

def purity_score(y_true, y_pred):
    """Purity score
        Args:
            y_true(np.ndarray): n*1 matrix Ground truth labels
            y_pred(np.ndarray): n*1 matrix Predicted clusters

        Returns:
            float: Purity score
    """
    # matrix which will hold the majority-voted labels
    y_voted_labels = np.zeros(y_true.shape)
    # Ordering labels
    ## Labels might be missing, e.g. a label set like {0, 2} where 1 is missing.
    ## First find the unique labels, then map them to an ordered set,
    ## so {0, 2} becomes {0, 1}. Note: this remapping rewrites y_true in place.
    labels = np.unique(y_true)
    ordered_labels = np.arange(labels.shape[0])
    for k in range(labels.shape[0]):
        y_true[y_true==labels[k]] = ordered_labels[k]
    # Update unique labels
    labels = np.unique(y_true)
    # Append one extra bin edge (max label + 1) so that np.histogram has one
    # bin per class and counts occurrences within each half-open interval
    # [bin_i, bin_i+1)
    bins = np.concatenate((labels, [np.max(labels)+1]), axis=0)

    for cluster in np.unique(y_pred):
        hist, _ = np.histogram(y_true[y_pred==cluster], bins=bins)
        # Find the most present label in the cluster
        winner = np.argmax(hist)
        y_voted_labels[y_pred==cluster] = winner

    return accuracy_score(y_true, y_voted_labels)
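
Usage is the same as for the version above; since the label-remapping step rewrites y_true in place, pass a copy if the original array must be preserved. A hypothetical example:

# Hypothetical toy labels, just to illustrate usage.
y_true = np.array([0, 0, 2, 2, 2])
y_pred = np.array([1, 1, 0, 0, 1])

print(purity_score(y_true.copy(), y_pred))  # 4/5 = 0.8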

Answer 3 (score: 0)

The currently top voted answer correctly implements the purity metric, but it may not be the most appropriate metric in all cases, because it does not ensure that each predicted cluster label is assigned to a true label only once.

For example, consider a very unbalanced dataset with 99 examples of one label and 1 example of another. Then any clustering (e.g. one with two equal clusters of size 50) will achieve a purity of at least 0.99, which makes it a useless metric.

Instead, in cases where the number of clusters equals the number of labels, cluster accuracy may be more appropriate. It has the advantage of mirroring classification accuracy in an unsupervised setting. To compute cluster accuracy, we need to use the Hungarian algorithm to find the optimal matching between cluster labels and true labels. The SciPy function linear_sum_assignment does this:

import numpy as np
from sklearn import metrics
from scipy.optimize import linear_sum_assignment

def cluster_accuracy(y_true, y_pred):
    # compute contingency matrix (also called confusion matrix)
    contingency_matrix = metrics.cluster.contingency_matrix(y_true, y_pred)

    # Find optimal one-to-one mapping between cluster labels and true labels
    row_ind, col_ind = linear_sum_assignment(-contingency_matrix)

    # Return cluster accuracy
    return contingency_matrix[row_ind, col_ind].sum() / np.sum(contingency_matrix)
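
To make the contrast concrete, here is the unbalanced example from above as a hypothetical check, using purity_score from the top answer:

# 99 samples of label 0 and 1 sample of label 1, split into two clusters of 50;
# the single label-1 sample ends up in the second cluster.
y_true = np.array([0] * 99 + [1])
y_pred = np.array([0] * 50 + [1] * 50)

print(purity_score(y_true, y_pred))      # 0.99
print(cluster_accuracy(y_true, y_pred))  # 0.51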