Spark Scala --- ML KMeans cluster prediction column

Date: 2018-12-15 17:32:38

Tags: scala apache-spark apache-spark-sql apache-spark-dataset

After running the KMeans algorithm on a Dataset, I want to add a prediction column to that Dataset, but I don't know how to do it. Here is the code I have so far (taken from the Spark documentation):

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.{Dataset, Encoders}

case class MyCase(sId: Int, tId: Int, label: Double, sAuthors: String, sYear: Int, sJournal: String,
                  tAuthors: String, tYear: Int, tJournal: String, yearDiff: Int, nCommonAuthors: Int,
                  isSelfCitation: Boolean, isSameJournal: Boolean, cosSimTFIDF: Double, sInDegrees: Int,
                  sNeighbors: Array[Long], tInDegrees: Int, tNeighbors: Array[Long], inDegreesDiff: Int,
                  commonNeighbors: Int, jaccardCoefficient: Double)

val men = Encoders.product[MyCase]

val ds: Dataset[MyCase] = transformedTrainingSetDF.as(men)

// KMeans
val numOfClusters = 2
val kmeans = new KMeans().setK(numOfClusters).setSeed(1L)
val model = kmeans.fit(ds)
// Evaluate clustering by computing Within Set Sum of Squared Errors.
val WSSSE = model.computeCost(ds)
println(s"Within Set Sum of Squared Errors = $WSSSE")
// Show the resulting cluster centers.
println("Cluster Centers: ")
model.clusterCenters.foreach(println)
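Note that KMeans.fit expects the input to contain a vector column named by setFeaturesCol (default "features"). The code above assumes transformedTrainingSetDF already has one; if not, here is a minimal sketch of assembling it with VectorAssembler (the choice of input columns below is only an illustration, pick the numeric features you actually want to cluster on):

import org.apache.spark.ml.feature.VectorAssembler

// Hypothetical column selection: numeric fields from MyCase combined
// into a single "features" vector column that KMeans can consume.
val assembler = new VectorAssembler()
  .setInputCols(Array("yearDiff", "nCommonAuthors", "cosSimTFIDF",
    "inDegreesDiff", "commonNeighbors", "jaccardCoefficient"))
  .setOutputCol("features")

val withFeatures = assembler.transform(transformedTrainingSetDF)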

1 Answer:

Answer 0 (score: 1)

Use KMeansModel.transform:

def transform(dataset: Dataset[_]): DataFrame

Transforms the input dataset.

model.transform(ds)
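Applied to the question's code, a minimal usage sketch ("prediction" is the default output column name set by setPredictionCol):

// transform() returns a new DataFrame with an appended "prediction"
// column holding the integer cluster index for each row; the input
// columns are left untouched.
val predictions = model.transform(ds)
predictions.select("sId", "tId", "prediction").show(5)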