How do I label each observation with its predicted cluster using KMeans in PySpark?

Date: 2017-11-10 11:31:19

Tags: pyspark cluster-analysis apache-spark-mllib

I want to understand how the k-means method works in PySpark. To do so, I have put together this small example:

In [119]: from pyspark.mllib.clustering import KMeans

In [120]: entry = [[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5],[5,5,5],[5,5,5],[1,1,1],[5,5,5]]

In [121]: rdd_entry = sc.parallelize(entry)

In [122]: clusters = KMeans.train(rdd_entry, k=5, maxIterations=10, initializationMode="random")

In [123]: rdd_labels = clusters.predict(rdd_entry)

In [125]: rdd_labels.collect()
Out[125]: [3, 1, 0, 0, 2, 2, 2, 3, 2]

In [126]: entry
Out[126]:
[[1, 1, 1],
 [2, 2, 2],
 [3, 3, 3],
 [4, 4, 4],
 [5, 5, 5],
 [5, 5, 5],
 [5, 5, 5],
 [1, 1, 1],
 [5, 5, 5]]

At first glance it looks like rdd_labels returns the cluster each observation belongs to, respecting the order of the original RDD. Although that is obvious in this small example, how can I be sure of it when working with 8 million observations?

In addition, I would like to know how to join rdd_entry and rdd_labels, respecting that order, so that each observation in rdd_entry is correctly labeled with its cluster. I tried a .join(), but it raises an error:

In [127]: rdd_total = rdd_entry.join(rdd_labels)

In [128]: rdd_total.collect()

TypeError: 'int' object has no attribute '__getitem__'
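As context for the error: RDD.join works on pair RDDs of (key, value) tuples, and here rdd_entry holds plain lists while rdd_labels holds plain ints, so Spark fails as soon as it tries to unpack a key from an int. Below is a minimal, hedged sketch of two order-safe ways to pair each observation with its cluster, assuming the clusters model trained above (mllib's KMeansModel keeps its centers locally, and its predict accepts either a single point or an RDD):

# Option 1: zip the points with their predictions. predict(rdd) is an
# element-wise map over the same RDD, so partitioning and order line up.
rdd_total = rdd_entry.zip(clusters.predict(rdd_entry))

# Option 2: call predict per point inside a map; this never relies on
# the relative order of two separate RDDs.
rdd_total = rdd_entry.map(lambda point: (point, clusters.predict(point)))

rdd_total.collect()
# e.g. [([1, 1, 1], 3), ([2, 2, 2], 1), ([3, 3, 3], 0), ...]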

1 Answer:

Answer (score: 1):

Hope this helps! (This solution is based on pyspark.ml rather than the pyspark.mllib API used in the question.)

from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler

# sample data
df = sc.parallelize([[1,1,1],[2,2,2],[3,3,3],[4,4,4],[5,5,5],[5,5,5],[5,5,5],[1,1,1],[5,5,5]]).\
    toDF(('col1','col2','col3'))

# assemble the three numeric columns into a single vector column
vecAssembler = VectorAssembler(inputCols=df.columns, outputCol="features")
vector_df = vecAssembler.transform(df)

# k-means clustering
kmeans = KMeans(k=3, seed=1)
model = kmeans.fit(vector_df)
predictions = model.transform(vector_df)
predictions.show()

The output is:

+----+----+----+-------------+----------+
|col1|col2|col3|     features|prediction|
+----+----+----+-------------+----------+
|   1|   1|   1|[1.0,1.0,1.0]|         0|
|   2|   2|   2|[2.0,2.0,2.0]|         0|
|   3|   3|   3|[3.0,3.0,3.0]|         2|
|   4|   4|   4|[4.0,4.0,4.0]|         1|
|   5|   5|   5|[5.0,5.0,5.0]|         1|
|   5|   5|   5|[5.0,5.0,5.0]|         1|
|   5|   5|   5|[5.0,5.0,5.0]|         1|
|   1|   1|   1|[1.0,1.0,1.0]|         0|
|   5|   5|   5|[5.0,5.0,5.0]|         1|
+----+----+----+-------------+----------+
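As a small hedged follow-up (assuming the model and predictions objects created above), you can also inspect the learned centroids or keep just the original columns plus the cluster label:

# Cluster centers learned by the pyspark.ml KMeansModel (one array per cluster).
for center in model.clusterCenters():
    print(center)

# Keep only the original columns plus the predicted cluster.
labeled_df = predictions.select('col1', 'col2', 'col3', 'prediction')
labeled_df.show()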