Error with RDD[Vector] as a function parameter

Date: 2016-05-13 15:18:10

Tags: scala apache-spark apache-spark-mllib apache-spark-ml

I am trying to define a function in Scala and call it iteratively with Spark. Here is my code:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

import org.apache.spark.ml.feature.VectorIndexer
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.rdd._

val assembler = new VectorAssembler()
  .setInputCols(Array("feature1", "feature2", "feature3"))
  .setOutputCol("features")
val assembled = assembler.transform(df)

// Measures the clustering cost (within-set sum of squared errors) for a model built with a given k.

def clusteringScore(data: RDD[Vector], k: Int) = {
  val kmeans = new KMeans()
    .setK(k)
    .setFeaturesCol("features")
    .setPredictionCol("prediction")
  val model = kmeans.fit(data)

  val WSSSE = model.computeCost(data)
  println(s"Within Set Sum of Squared Errors = $WSSSE")
}

(5 to 40 by 5).map(k => (k, clusteringScore(assembled, k))).foreach(println)

With this code I get the following error:

type Vector takes type parameters

I don't know what this error means...

1 Answer:

Answer 0 (score: 7):

Your imports include org.apache.spark.mllib.linalg.Vectors (the factory object), but not the Vector type itself, so the Vector in your function signature resolves to the Scala standard-collection Vector (which does take a type parameter, e.g. Vector[Int]) rather than the Spark MLlib Vector. They are different types, and you should import the MLlib one like this:

import org.apache.spark.mllib.linalg.Vector
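
For completeness, here is a minimal sketch of how the function could look once the types line up. Note that the code mixes the two APIs: the KMeans in use is the DataFrame-based org.apache.spark.ml.clustering.KMeans, whose fit method takes a DataFrame with a vector "features" column (exactly what the VectorAssembler produces), not an RDD[Vector]. This sketch assumes a Spark version where the ml KMeansModel.computeCost method exists (2.0 or later):

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.DataFrame

// Measures the clustering cost (within-set sum of squared errors)
// for a model built with a given k. The ml KMeans fits a DataFrame
// that carries a vector "features" column, so no RDD[Vector] is needed.
def clusteringScore(data: DataFrame, k: Int): Double = {
  val kmeans = new KMeans()
    .setK(k)
    .setFeaturesCol("features")
    .setPredictionCol("prediction")
  val model = kmeans.fit(data)
  model.computeCost(data) // WSSSE; available on the ml KMeansModel since Spark 2.0
}

// "assembled" is the DataFrame produced by the VectorAssembler above.
(5 to 40 by 5).map(k => (k, clusteringScore(assembled, k))).foreach(println)

If both the Scala Vector and the Spark one are needed in the same file, a common idiom is to disambiguate with a rename on import, e.g. import org.apache.spark.mllib.linalg.{Vector => MLlibVector}.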