I want to choose the k-means model with the lowest clustering score, as a function of the 'k' parameter. I can find the best value of 'k' manually, writing something like this:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.clustering.{KMeans, KMeansModel}
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.DataFrame
import scala.util.Random

// Fits an assembler + k-means pipeline and returns the clustering cost
// (WSSSE) per point, so scores are comparable across values of k.
def clusteringScore0(data: DataFrame, k: Int): Double = {
  val assembler = new VectorAssembler().
    setInputCols(data.columns.filter(_ != "label")).
    setOutputCol("featureVector")
  val kmeans = new KMeans().
    setSeed(Random.nextLong()).
    setK(k).
    setPredictionCol("cluster").
    setFeaturesCol("featureVector")
  val pipeline = new Pipeline().setStages(Array(assembler, kmeans))
  val kmeansModel = pipeline.fit(data).stages.last.asInstanceOf[KMeansModel]
  kmeansModel.computeCost(assembler.transform(data)) / data.count()
}

(20 to 100 by 20).map(k => (k, clusteringScore0(numericOnly, k))).
  foreach(println)
But I would rather search over 'k' with the ML Pipeline / CrossValidator API, something like this:

import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val paramGrid = new ParamGridBuilder().
  addGrid(kmeans.k, 20 to 100 by 20). // grid over the k param of the KMeans estimator, not the fitted model
  build()
val cv = new CrossValidator().
  setEstimator(pipeline).
  setEvaluator(new KMeansEvaluator()).
  setEstimatorParamMaps(paramGrid).
  setNumFolds(3)
There are evaluators for regression and classification, but no evaluator for clustering. So I should implement the Evaluator interface, and I am stuck on the evaluate method.
import org.apache.spark.ml.evaluation.Evaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.Dataset

class KMeansEvaluator extends Evaluator {
  override def copy(extra: ParamMap): Evaluator = defaultCopy(extra)
  override def evaluate(data: Dataset[_]): Double = ??? // should I somehow adapt code from KMeansModel.computeCost()?
  override val uid = Identifiable.randomUID("cost_evaluator")
}
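One way evaluate could be filled in, if no built-in clustering evaluator is available, is to recompute each cluster's centroid from the predictions and average the squared distances to it, roughly mirroring KMeansModel.computeCost. This is only a sketch: the class name KMeansCostEvaluator is made up, and it assumes the dataset passed to evaluate already carries the 'cluster' and 'featureVector' columns produced by the pipeline.

import org.apache.spark.ml.evaluation.Evaluator
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.Dataset

class KMeansCostEvaluator extends Evaluator {
  override val uid = Identifiable.randomUID("kmeans_cost_evaluator")
  override def copy(extra: ParamMap): Evaluator = defaultCopy(extra)
  // Lower cost is better, so CrossValidator should minimize this metric.
  override def isLargerBetter: Boolean = false
  override def evaluate(data: Dataset[_]): Double = {
    val points = data.select("cluster", "featureVector").rdd.
      map(row => (row.getInt(0), row.getAs[Vector](1)))
    points.cache()
    // Recompute each cluster's centroid as the mean of its assigned points.
    val centers = points.
      mapValues(v => (v.toArray, 1L)).
      reduceByKey { case ((a, n), (b, m)) =>
        (a.zip(b).map { case (x, y) => x + y }, n + m)
      }.
      mapValues { case (sum, n) => sum.map(_ / n) }.
      collectAsMap()
    // Mean squared distance of each point to its cluster's centroid.
    val cost = points.map { case (c, v) =>
      v.toArray.zip(centers(c)).map { case (x, y) => (x - y) * (x - y) }.sum
    }.mean()
    points.unpersist()
    cost
  }
}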
Answer (score: 4):
Hi, ClusteringEvaluator has been available since Spark 2.3.0. You can find the optimal k value by scoring each candidate model with a ClusteringEvaluator inside a for loop. You can also find more details on silhouette analysis on the Scikit-learn page. In short, the score should be in [-1, 1]: the larger the score, the better. I have adapted your code into the for loop below.
import org.apache.spark.ml.evaluation.ClusteringEvaluator

val evaluator = new ClusteringEvaluator().
  setFeaturesCol("featureVector").
  setPredictionCol("cluster").
  setMetricName("silhouette")

for (k <- 20 to 100 by 20) {
  // clusteringScore0 above does not return the fitted model, so refit the
  // same assembler + k-means pipeline here to get cluster assignments to score.
  val assembler = new VectorAssembler().
    setInputCols(numericOnly.columns.filter(_ != "label")).
    setOutputCol("featureVector")
  val kmeans = new KMeans().setSeed(Random.nextLong()).setK(k).
    setPredictionCol("cluster").setFeaturesCol("featureVector")
  val pipelineModel = new Pipeline().setStages(Array(assembler, kmeans)).fit(numericOnly)
  val kmeansModel = pipelineModel.stages.last.asInstanceOf[KMeansModel]
  val transformedDF = pipelineModel.transform(numericOnly)
  val score = evaluator.evaluate(transformedDF)
  println((k, score, kmeansModel.computeCost(transformedDF)))
}
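Since ClusteringEvaluator implements the same Evaluator interface, on Spark 2.3+ it can also be plugged straight into the CrossValidator setup from your question, so no custom KMeansEvaluator is needed. A minimal sketch, assuming pipeline and kmeans are built as in clusteringScore0 and evaluator is the ClusteringEvaluator from above:

import org.apache.spark.ml.PipelineModel
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val paramGrid = new ParamGridBuilder().
  addGrid(kmeans.k, 20 to 100 by 20).
  build()
val cv = new CrossValidator().
  setEstimator(pipeline).
  setEvaluator(evaluator). // the silhouette-based ClusteringEvaluator
  setEstimatorParamMaps(paramGrid).
  setNumFolds(3)
val cvModel = cv.fit(numericOnly)
// CrossValidator keeps the model with the highest average silhouette,
// since ClusteringEvaluator reports a larger-is-better metric.
val bestK = cvModel.bestModel.asInstanceOf[PipelineModel].
  stages.last.asInstanceOf[KMeansModel].getK

Cross-validating an unsupervised metric is a little unusual (each fold's silhouette is computed on held-out rows), but it automates the search over k exactly as asked.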