How to calculate the accuracy of agglomerative clustering

Time: 2018-09-09 17:25:09

Tags: cluster-analysis hierarchical-clustering hierarchical

Hi, I used the AgglomerativeClustering example in Python and tried to estimate its performance, but the clustering switches the original labels. I am trying to compare the predicted labels y_hc against the original labels y from make_blobs.

import scipy.cluster.hierarchy as sch
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs
import numpy as np
import matplotlib.pyplot as plt
data,y = make_blobs(n_samples=300, n_features=2, centers=4, cluster_std=2, random_state=50)
plt.figure(2)
# create dendrogram
dendrogram = sch.dendrogram(sch.linkage(data, method='ward'))
plt.title('dendrogram')

# create clusters; alternatives: linkage='ward' with affinity='euclidean',
# or linkage='average' with another affinity metric
hc = AgglomerativeClustering(n_clusters=4, linkage="average", affinity='euclidean')

# fit the model and save the cluster assignments for the chart
# (fit_predict only needs the data; a y argument would be ignored)
y_hc = hc.fit_predict(data)

plt.figure(3)

# plot the ground-truth labels (large markers)
plt.scatter(data[y == 0, 0], data[y == 0, 1], c='red', s=50)
plt.scatter(data[y == 1, 0], data[y == 1, 1], c='black', s=50)
plt.scatter(data[y == 2, 0], data[y == 2, 1], c='blue', s=50)
plt.scatter(data[y == 3, 0], data[y == 3, 1], c='cyan', s=50)

plt.xlim(-15, 15)
plt.ylim(-15, 15)

# overlay the predicted clusters (small markers); the colors may not line
# up with the ground truth because cluster ids are arbitrary
plt.scatter(data[y_hc == 0, 0], data[y_hc == 0, 1], s=10, c='red')
plt.scatter(data[y_hc == 1, 0], data[y_hc == 1, 1], s=10, c='black')
plt.scatter(data[y_hc == 2, 0], data[y_hc == 2, 1], s=10, c='blue')
plt.scatter(data[y_hc == 3, 0], data[y_hc == 3, 1], s=10, c='cyan')
# estimate per-cluster accuracy: for each predicted cluster, find the
# majority true label and compare counts against that label's class size
for ii in range(4):
    i0 = y_hc == ii
    counts = np.bincount(y[i0])
    valCountAtorgLbl = np.argmax(counts)
    accuracy0Tp = 100 * np.max(counts) / y[y == valCountAtorgLbl].shape[0]
    accuracy0Fp = 100 * np.min(counts) / y[y == valCountAtorgLbl].shape[0]
    print(ii, [accuracy0Tp, accuracy0Fp])
plt.show()


1 Answer:

Answer 0 (score: 1)

Clustering can recover the original groupings, but it cannot recover the original label values.
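Because cluster ids are arbitrary, one way to sidestep the problem entirely is a label-permutation-invariant score. The answer does not name a specific metric, but as one example, scikit-learn's adjusted Rand index compares two partitions regardless of how the clusters are numbered:

```python
# A permutation-invariant evaluation sketch: the adjusted Rand index (ARI)
# scores agreement between two partitions, ignoring the label values.
from sklearn.metrics import adjusted_rand_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [2, 2, 0, 0, 1, 1]  # the same partition with permuted labels

print(adjusted_rand_score(y_true, y_pred))  # prints 1.0: identical partitions
```

An ARI of 1.0 here shows the two labelings describe the same grouping even though no label values match, which is exactly the "label switching" the question observes.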

You seem to assume that cluster 1 corresponds to label 1 (which might, in fact, be labeled "iris setosa", and there is obviously no way an unsupervised algorithm could come up with that cluster name...). In general it will not; the number of clusters and the number of classes may not even be the same, and there may be unlabeled noise points. You can use the Hungarian algorithm to compute the optimal mapping (or just a greedy matching) to produce a more intuitive color mapping.
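The Hungarian matching suggested above can be sketched with SciPy's `linear_sum_assignment`; the helper name `best_label_mapping` is hypothetical, and the sketch assumes equal numbers of clusters and classes:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_label_mapping(y_true, y_pred):
    """Remap predicted cluster ids to true label ids via the Hungarian
    algorithm, maximizing the number of agreeing points."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    # contingency[i, j] = number of points with predicted id i and true id j
    contingency = np.zeros((n, n), dtype=int)
    for p, t in zip(y_pred, y_true):
        contingency[p, t] += 1
    # linear_sum_assignment minimizes cost, so negate counts to maximize overlap
    row_ind, col_ind = linear_sum_assignment(-contingency)
    mapping = dict(zip(row_ind, col_ind))
    return np.array([mapping[p] for p in y_pred])

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])  # same partition, switched labels
remapped = best_label_mapping(y_true, y_pred)
print((remapped == y_true).mean())  # prints 1.0: accuracy after relabeling
```

After the optimal relabeling, an ordinary accuracy (fraction of matching labels) becomes meaningful, and the same remapped ids can drive the scatter-plot colors so that matching clusters get matching colors.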