Spark K-Means: Getting the Original Cluster Centers / Centroids After Normalization

Date: 2017-09-05 20:20:08

Tags: scala apache-spark cluster-analysis normalization k-means

I ran a k-means model:

import org.apache.spark.ml.clustering.KMeans

val kmeans = new KMeans().setK(k).setSeed(1L)
val model = kmeans.fit(train_dataset)

and then extracted the cluster centers (centroids):

var clusterCenters: Seq[(Double, Double, Double, Double, Double, Double, Double, Double, Double)] = Seq()
for (e <- model.clusterCenters) {
  // double parentheses so the nine values form a single tuple argument
  clusterCenters = clusterCenters :+ ((e(0), e(1), e(2), e(3), e(4), e(5), e(6), e(7), e(8)))
}

import spark.implicits._  // spark: the active SparkSession
val centroidsDF = clusterCenters.toDF()

To write the results back, I created a DataFrame from the resulting cluster centers.

The problem I now face is that I normalized the data beforehand to improve the clustering results:

import org.apache.spark.ml.feature.StandardScaler

val scaler = new StandardScaler()
  .setInputCol("features")
  .setOutputCol("scaledFeatures")
  .setWithStd(true)
  .setWithMean(false)

val scalerModel = scaler.fit(train_dataset)
val scaledData = scalerModel.transform(train_dataset)

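As a side note, what this scaler computes can be sketched without Spark. With `withStd=true` and `withMean=false`, each feature is simply divided by its per-feature standard deviation (the numbers below are hypothetical, for illustration only):

```scala
// Hypothetical per-feature standard deviations and one raw feature row.
val std = Array(2.0, 0.5, 4.0)
val raw = Array(3.0, 3.0, 1.0)

// withMean=false, withStd=true: scaling is just an element-wise division.
val scaled = raw.zip(std).map { case (x, s) => x / s }
// scaled == Array(1.5, 6.0, 0.25)
```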
How can I de-normalize the centroids back to their original form?

1 Answer:

Answer 0 (score: 3)

I'm not sure this makes sense, but since you do not center the data (withMean is false), you can simply multiply the centers by the std vector:

import org.apache.spark.ml.clustering.KMeansModel
import org.apache.spark.ml.feature.{ElementwiseProduct, StandardScalerModel}
import spark.implicits._  // spark: the active SparkSession

val kmeans: KMeansModel = ???
val scaler: StandardScalerModel = ???

new ElementwiseProduct()
  .setScalingVec(scaler.std)  // standard deviations used by the scaler
  .setInputCol("cluster")
  .setOutputCol("rescaled")
  .transform(sc.parallelize(
    // get the centers, pair each with its index, and convert to a DataFrame
    kmeans.clusterCenters.zipWithIndex).toDF("cluster", "id"))
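The arithmetic behind this is just the inverse of the scaling step; a minimal sketch with plain Scala arrays and hypothetical numbers:

```scala
// Hypothetical std vector used by the scaler and a centroid found in scaled space.
val std = Array(2.0, 0.5, 4.0)
val scaledCenter = Array(1.5, 6.0, 0.25)

// Scaling divided each feature by its std, so multiply element-wise to undo it.
val originalCenter = scaledCenter.zip(std).map { case (c, s) => c * s }
// originalCenter == Array(3.0, 3.0, 1.0)
```

`ElementwiseProduct` performs exactly this element-wise multiplication, just expressed over a DataFrame column of vectors.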