Is silhouette coefficient subsampling stratified in sklearn?

Asked: 2013-12-18 11:19:40

Tags: python machine-learning cluster-analysis scikit-learn

I'm having trouble with the scikit-learn silhouette coefficient again (first question here: silhouette coefficient in python with sklearn). I have a very unbalanced clustering, but with many individuals, so I would like to use the sample_size parameter of the silhouette coefficient. I was wondering whether the subsampling is stratified, i.e. sampled per cluster. I use the iris dataset as an example, but my own dataset is far bigger (which is why I need sampling). My code is:

import pandas as pd
from sklearn import datasets
from sklearn.metrics import silhouette_score

iris = datasets.load_iris()
col = iris.feature_names
name = iris.target_names
X = pd.DataFrame(iris.data, columns=col)
y = iris.target
s = silhouette_score(X.values, y, metric='euclidean', sample_size=50)

which works. But now if I bias the labels:

y[0:148] = 0
y[148] = 1
y[149] = 2
print y
s = silhouette_score(X.values, y, metric='euclidean', sample_size=50)

I get:

ValueError                                Traceback (most recent call last)
<ipython-input-12-68a7fba49c54> in <module>()
      4 y[149] =2
      5 print y
----> 6 s = silhouette_score(X.values, y, metric='euclidean',sample_size=50)

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_score(X, labels, metric, sample_size, random_state, **kwds)
     82         else:
     83             X, labels = X[indices], labels[indices]
---> 84     return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
     85 
     86 

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in silhouette_samples(X, labels, metric, **kwds)
    146                   for i in range(n)])
    147     B = np.array([_nearest_cluster_distance(distances[i], labels, i)
--> 148                   for i in range(n)])
    149     sil_samples = (B - A) / np.maximum(A, B)
    150     # nan values are for clusters of size 1, and should be 0

/usr/local/lib/python2.7/dist-packages/sklearn/metrics/cluster/unsupervised.pyc in _nearest_cluster_distance(distances_row, labels, i)
    200     label = labels[i]
    201     b = np.min([np.mean(distances_row[labels == cur_label])
--> 202                for cur_label in set(labels) if not cur_label == label])
    203     return b

/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.pyc in amin(a, axis, out, keepdims)
   1980         except AttributeError:
   1981             return _methods._amin(a, axis=axis,
-> 1982                                 out=out, keepdims=keepdims)
   1983         # NOTE: Dropping the keepdims parameter
   1984         return amin(axis=axis, out=out)

/usr/lib/python2.7/dist-packages/numpy/core/_methods.pyc in _amin(a, axis, out, keepdims)
     12 def _amin(a, axis=None, out=None, keepdims=False):
     13     return um.minimum.reduce(a, axis=axis,
---> 14                             out=out, keepdims=keepdims)
     15 
     16 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):

ValueError: zero-size array to reduction operation minimum which has no identity

Since I think the sampling is random rather than stratified, it does not take the two small clusters into account.

Am I wrong?
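To convince myself, here is a small check of my own (a sketch, assuming the subsampling is a plain random permutation of the row indices) of how often a 50-point subsample of the biased labels still contains all three clusters:

    import numpy as np

    # My own sketch, not sklearn code: assume the subsample is just the
    # first 50 indices of a random permutation, as I suspect sklearn does.
    rng = np.random.RandomState(0)
    labels = np.zeros(150, dtype=int)
    labels[148] = 1
    labels[149] = 2

    hits = sum(
        len(set(labels[rng.permutation(150)[:50]])) == 3
        for _ in range(1000)
    )
    print("subsamples containing all three clusters: %d / 1000" % hits)

Only a small fraction of the random subsamples keep both singleton clusters, which would explain the zero-size-array error.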

3 Answers:

Answer 0 (score: 2)

Yes, you are right. The sampling is not stratified, because the labels are not taken into account when the sample is drawn.

This is how the sampling is done (version 0.14.1):

indices = random_state.permutation(X.shape[0])[:sample_size]

where X is the input array of shape [n_samples_a, n_samples_a] or [n_samples_a, n_features].
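As a possible workaround (my own sketch, not part of the library), you can do the stratified subsampling yourself and then call silhouette_score on the subsample without sample_size:

    import numpy as np
    from sklearn.metrics import silhouette_score

    def stratified_silhouette(X, labels, sample_size=50, random_state=None):
        """Sample each cluster proportionally (at least one point per cluster)
        and compute the silhouette score on that subsample."""
        rng = np.random.RandomState(random_state)
        labels = np.asarray(labels)
        picked = []
        for lab in np.unique(labels):
            idx = np.where(labels == lab)[0]
            # proportional share of the sample, but never zero
            n = max(1, int(round(sample_size * len(idx) / float(len(labels)))))
            picked.append(rng.choice(idx, size=min(n, len(idx)), replace=False))
        picked = np.concatenate(picked)
        return silhouette_score(X[picked], labels[picked], metric='euclidean')

    s = stratified_silhouette(X.values, y, sample_size=50, random_state=0)

Note that a cluster that ends up with a single sampled point contributes a silhouette of 0 for that point, so very small clusters pull the score towards 0 rather than being ignored.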

Answer 1 (score: 1)

I think you're right: the current implementation does not support balanced resampling.

Answer 2 (score: 0)

Just a 2020 update:

As of scikit-learn 0.22.1, the sampling is still random (i.e. not stratified). The source code is still:

indices = random_state.permutation(X.shape[0])[:sample_size]
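A possible present-day workaround (again my own sketch, not from the answer itself): draw a stratified subsample with train_test_split and score it directly. Note that stratify requires at least two members per class, so for clusters of size 1 you would still need a manual scheme like the one sketched under Answer 0.

    from sklearn.model_selection import train_test_split
    from sklearn.metrics import silhouette_score

    # Assuming X and y as in the question (the original, unbiased iris labels,
    # so every cluster has at least two members as stratify requires).
    X_sub, _, y_sub, _ = train_test_split(
        X.values, y, train_size=50, stratify=y, random_state=0)
    s = silhouette_score(X_sub, y_sub, metric='euclidean')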