K-means with a condition

Date: 2017-05-10 09:11:23

Tags: scikit-learn cluster-analysis k-means

I want to apply K-means (or any other simple clustering algorithm) to data with two variables, but I want the clusters to respect a condition: the sum of a third variable per cluster > SOME_VALUE. Is that possible?

3 Answers:

Answer 0 (score: 1)

Notation (a short numeric illustration follows the list):
- K is the number of clusters
- let's say the first two variables are the point coordinates (x, y)
- V denotes the third variable
- Ci: the sum of V over each cluster i
- S = sum(Ci)
- and a threshold T
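As a quick illustration of this notation (an added sketch, not part of the original answer), the Ci and the constraint check can be computed directly from an assignment vector:

import numpy as np

# toy data: labels[j] is the cluster of point j, third_var[j] is its V value
labels = np.array([0, 1, 0, 0])
third_var = np.array([50.0, 1.0, 50.0, 50.0])
K, T = 2, 40.0

C = np.array([third_var[labels == i].sum() for i in range(K)])  # the Ci
S = C.sum()                                                     # S = sum(Ci)
print(C, S, (C > T).all())  # [150.   1.] 151.0 False -> C2 violates Ci > T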

Problem definition:
As I understand it, the aim is to run an algorithm that keeps the spirit of kmeans while respecting the constraint.

Task 1 - group points close to the centroids [kmeans]
Task 2 - for each cluster i, Ci > T [constraint]

Limitation of a regular kmeans for the constrained problem:
A regular kmeans assigns points to centroids by taking them in an arbitrary order. In our case, this leads to uncontrolled growth of the Ci while points are being added.

For example, take K = 2, T = 40 and 4 points whose third variable equals V1 = 50, V2 = 1, V3 = 50, V4 = 50. Suppose also that points P1, P3, P4 are closer to centroid 1, and that point P2 is closer to centroid 2.

Let's run the assignment step of a regular kmeans and keep track of the Ci (a short code trace of these steps follows):
1 - take point P1, assign it to cluster 1. C1 = 50 > T
2 - take point P2, assign it to cluster 2. C2 = 1
3 - take point P3, assign it to cluster 1. C1 = 100 > T => C1 grows far too much!
4 - take point P4, assign it to cluster 1. C1 = 150 > T => !!!
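For reference, a minimal trace of this greedy behaviour (illustrative code, not part of the original answer):

# greedy kmeans-style assignment that ignores the constraint, tracked step by step
V = {'P1': 50, 'P2': 1, 'P3': 50, 'P4': 50}
closest = {'P1': 1, 'P2': 2, 'P3': 1, 'P4': 1}  # nearest centroid of each point
T = 40

C = {1: 0, 2: 0}
for p in ('P1', 'P2', 'P3', 'P4'):
    C[closest[p]] += V[p]
    print(p, '-> cluster', closest[p], '| C1 =', C[1], '| C2 =', C[2])
# C1 ends at 150 while C2 is stuck at 1: the constraint C2 > T is never met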

Modified kmeans:
In the example above, we would like to prevent C1 from growing too much and to help C2 grow.

It is like pouring champagne into several glasses: if you see a glass with little champagne in it, you go and fill it. You do that because you have constraints: a limited amount of champagne (S is bounded), and because you want every glass to have enough champagne (Ci > T).

Of course this is just an analogy. Our modified kmeans adds new points to the cluster with the minimal Ci until the constraint is reached (Task 2). Now, in which order should we add points? By proximity to the centroids (Task 1). Once the constraint has been achieved for every cluster i, we simply run a regular kmeans on the remaining unassigned points.

Implementation:
Below is a python implementation of the modified algorithm. Figure 1 shows a rendering of the third variable, using transparency to distinguish big vs. low values. Figure 2 shows the evolving clusters, using colors.

You can play with the accept_thresh parameter. In particular, note that:
- for accept_thresh = 0 => regular kmeans (the constraint is reached immediately)
- for accept_thresh = third_var.sum().sum() / (2 * K), you might observe that some points that are closer to a given centroid get affected to another one for constraint reasons.

Code

import sys  # for sys.exit() in the feasibility check below
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
import time

nb_samples = 1000
K = 3  # for demo purpose, used to generate cloud points
c_std = 1.2

# Generate test samples :
points, classes = datasets.make_blobs(n_features=2, n_samples=nb_samples, \
                                      centers=K, cluster_std=c_std)

third_var_distribution = 'cubic_bycluster'  # 'uniform'

if third_var_distribution == 'uniform':
    third_var = np.random.random((nb_samples))
elif third_var_distribution == 'linear_bycluster':
    third_var = np.random.random((nb_samples))
    third_var = third_var * classes
elif third_var_distribution == 'cubic_bycluster':
    third_var = np.random.random((nb_samples))
    third_var = third_var * classes ** 3  # cubic in the class label (was a copy of the linear branch)


# Threshold parameters :
# Try with K=3 and :
# T = K => one cluster reaches the constraint, two clusters won't converge
# T = 2K =>
accept_thresh = third_var.sum().sum() / (2*K)


def dist2centroids(points, centroids):
    '''Order points by distance to each centroid.
       ord_dist_indices[r, c] = index of the r-th closest point to centroid c
       dist[p, c] = distance of point p to centroid c (unsorted)
    '''
    dist = np.sqrt(((points - centroids[:, np.newaxis]) ** 2).sum(axis=2))
    ord_dist_indices = np.argsort(dist, axis=1)

    # transpose so that dim 0 indexes the points and dim 1 the centroids
    ord_dist_indices = ord_dist_indices.transpose()
    dist = dist.transpose()

    return ord_dist_indices, dist


def assign_points_with_constraints(inds, dists, tv, accept_thresh):
    assigned = [False] * nb_samples
    assignements = np.ones(nb_samples, dtype=int) * (-1)
    cumul_third_var = np.zeros(K, dtype=float)
    current_inds = np.zeros(K, dtype=int)

    max_round = nb_samples * K

    for round in range(0, max_round):  # we'll break anyway
        # worst advanced cluster in terms of cumulated third_var :
        cluster = np.argmin(cumul_third_var)

        if cumul_third_var[cluster] > accept_thresh:
            break  # even the least-filled cluster meets the constraint, so all do

        while current_inds[cluster] < nb_samples:
            # add points to increase cumulated third_var on this cluster
            i_inds = current_inds[cluster]
            closest_pt_index = inds[i_inds][cluster]

            if assigned[closest_pt_index]:
                current_inds[cluster] += 1
                continue  # pt already assigned to a cluster

            assignements[closest_pt_index] = cluster
            cumul_third_var[cluster] += tv[closest_pt_index]
            assigned[closest_pt_index] = True
            current_inds[cluster] += 1

            new_cluster = np.argmin(cumul_third_var)
            if new_cluster != cluster:
                break

    return assignements, cumul_third_var


def assign_points_with_kmeans(points, centroids, assignements):
    new_assignements = np.array(assignements, copy=True)

    for count, asg in enumerate(assignements):
        if asg > -1:
            continue  # already assigned during the constraint phase

        pt = points[count, :]

        distances = np.sqrt(((pt - centroids) ** 2).sum(axis=1))
        centroid = np.argmin(distances)

        new_assignements[count] = centroid

    return new_assignements


def move_centroids(points, labels):
    centroids = np.zeros((K, 2), dtype=float)

    for k in range(0, K):
        centroids[k] = points[labels == k].mean(axis=0)  # use the labels argument, not the global

    return centroids


rgba_colors = np.zeros((third_var.size, 4))
rgba_colors[:, 0] = 1.0
rgba_colors[:, 3] = 0.1 + (third_var / max(third_var))/1.12
plt.figure(1, figsize=(14, 14))
plt.title("Three blobs", fontsize='small')
plt.scatter(points[:, 0], points[:, 1], marker='o', c=rgba_colors)

# Initialize centroids
centroids = np.random.random((K, 2)) * 10
plt.scatter(centroids[:, 0], centroids[:, 1], marker='X', color='red')

# Step 1 : order points by distance to centroid :
inds, dists = dist2centroids(points, centroids)

# Check if clustering is theoretically possible :
tv_sum = third_var.sum()
tv_max = third_var.max()
if (tv_max > 1 / 3 * tv_sum):  # conservative feasibility check (written for K = 3)
    print("No solution to the clustering problem !\n")
    print("For one point : third variable is too high.")
    sys.exit(0)

stop_criter_eps = 0.001
epsilon = 100000
prev_cumdist = 100000

plt.figure(2, figsize=(14, 14))
ln, = plt.plot([])
plt.ion()
plt.show()

while epsilon > stop_criter_eps:

    # Modified kmeans assignment :
    assignements, cumul_third_var = assign_points_with_constraints(inds, dists, third_var, accept_thresh)

    # Kmeans on remaining points :
    assignements = assign_points_with_kmeans(points, centroids, assignements)

    centroids = move_centroids(points, assignements)

    inds, dists = dist2centroids(points, centroids)

    epsilon = np.abs(prev_cumdist - dists.sum().sum())

    print("Delta on error :", epsilon)

    prev_cumdist = dists.sum().sum()

    plt.clf()
    plt.title("Current Assignements", fontsize='small')
    plt.scatter(points[:, 0], points[:, 1], marker='o', c=assignements)
    plt.scatter(centroids[:, 0], centroids[:, 1], marker='o', color='red', linewidths=10)
    plt.text(0,0,"THRESHOLD T = "+str(accept_thresh), va='top', ha='left', color="red", fontsize='x-large')
    for k in range(0, K):
        plt.text(centroids[k, 0], centroids[k, 1] + 0.7, "Ci = "+str(cumul_third_var[k]))
    plt.show()
    plt.pause(1)

Improvements:
- use the distribution of the third variable for the assignments
- manage divergence of the algorithm
- better initialization (kmeans++; a sketch of this follows)
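On the last point, a minimal sketch of a kmeans++-style initialization (an addition, not part of the original answer); it assumes a scikit-learn version that exposes kmeans_plusplus (0.24+):

from sklearn.cluster import kmeans_plusplus

def init_centroids_pp(points, K, seed=0):
    # kmeans_plusplus returns the chosen centers and their indices in `points`
    centers, _ = kmeans_plusplus(points, n_clusters=K, random_state=seed)
    return centers

# usage, replacing: centroids = np.random.random((K, 2)) * 10
# centroids = init_centroids_pp(points, K)

Spreading the initial centroids apart this way typically helps with the divergence issue mentioned above.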

Answer 1 (score: 0)

One way to handle this is to filter the data before clustering:

>>> cluster_data = df.loc[df['third_variable'] > some_value]

>>> from sklearn.cluster import KMeans
>>> y_pred = KMeans(n_clusters=2).fit_predict(cluster_data) 

If by sum you mean the sum of the third variable per cluster, then you could use RandomizedSearchCV to find KMeans hyperparameters that do or do not meet the condition; a sketch of such a per-cluster check follows.
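As an illustration (added here, not part of the original answer), checking the per-cluster sums after a plain KMeans fit could look like the following; X and third_variable are hypothetical names for the clustering features and the constrained variable:

import numpy as np
from sklearn.cluster import KMeans

def cluster_sums_ok(X, third_variable, n_clusters, some_value):
    # fit a plain KMeans, then verify the constraint on each cluster afterwards
    labels = KMeans(n_clusters=n_clusters).fit_predict(X)
    sums = np.array([third_variable[labels == k].sum()
                     for k in range(n_clusters)])
    return (sums > some_value).all(), labels, sums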

Answer 2 (score: 0)

K-means itself is an optimization problem.

Your additional requirement is a rather common optimization constraint, too.

So I would rather approach this with an optimization solver.
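For illustration, one possible formulation as an integer program (an added sketch, not spelled out in the original answer): fix the centroids, then assign points so as to minimize the total squared distance, subject to the per-cluster sum constraint (the strict > is relaxed to >= for the solver). This assumes the PuLP library:

import numpy as np
import pulp

def constrained_assign(points, centroids, v, T):
    # x[i][k] = 1 if point i is assigned to cluster k
    n, K = len(points), len(centroids)
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)

    prob = pulp.LpProblem("constrained_kmeans_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(n), range(K)), cat="Binary")

    # objective: total squared distance of points to their assigned centroids
    prob += pulp.lpSum(d2[i, k] * x[i][k] for i in range(n) for k in range(K))
    # each point belongs to exactly one cluster
    for i in range(n):
        prob += pulp.lpSum(x[i][k] for k in range(K)) == 1
    # per-cluster sum of the third variable must reach the threshold
    for k in range(K):
        prob += pulp.lpSum(v[i] * x[i][k] for i in range(n)) >= T

    prob.solve()
    return np.array([max(range(K), key=lambda k: x[i][k].value())
                     for i in range(n)])

In practice one would alternate this assignment step with centroid updates, much like the modified kmeans in answer 0.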