import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.linalg.Vector
val sc: SparkContext = ...
// Load documents (one per line).
val documents: RDD[Seq[String]] = sc.textFile("...").map(_.split(" ").toSeq)
val hashingTF = new HashingTF()
val tf: RDD[Vector] = hashingTF.transform(documents)
When I try to compile the above code snippet, I get the following errors:
[error] /siva/test/src/main/scala/com/chimpler/sparknaivebayesreuters/Tokenizer.scala:10: object feature is not a member of package org.apache.spark.mllib
[error] import org.apache.spark.mllib.feature.HashingTF
[error] ^
[error] /siva/test/src/main/scala/com/chimpler/sparknaivebayesreuters/Tokenizer.scala:36: not found: type HashingTF
[error] val hashingTF = new HashingTF()
[error] ^
[error] /siva/test/src/main/scala/com/chimpler/sparknaivebayesreuters/Tokenizer.scala:37: not found: value hasingTF
[error] val tf: RDD[Vector] = hasingTF.transform(documents)
[error] ^
[error] three errors found
[error] (compile:compile) Compilation failed
[error] Total time: 14 s, completed 3 Nov, 2014 1:57:31 PM
I have added the following lines to my build.sbt file:
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.0.2" % "provided",
"org.apache.spark" %% "spark-mllib" % "1.0.2" % "provided")
// "org.apache.spark" %% "spark-streaming" % "1.0.0" % "provided")
Any pointers?
Answer (score: 1)
I was using the wrong mllib version. Changing the libraryDependencies to spark-mllib 1.1.0 fixed it.
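For illustration, the corrected dependency block in build.sbt would look something like the sketch below. This assumes the fix described in the answer (bumping the MLlib version to 1.1.0, where the `org.apache.spark.mllib.feature` package containing `HashingTF` is available); the exact versions should match the Spark installation on your cluster:

```scala
libraryDependencies ++= Seq(
  // Keep spark-core and spark-mllib on the same version. The original 1.0.2
  // mllib artifact predates the `feature` package, which is why the compiler
  // reported "object feature is not a member of package org.apache.spark.mllib".
  "org.apache.spark" %% "spark-core"  % "1.1.0" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.1.0" % "provided"
)
```

Note also that the quoted error log shows `hasingTF` (a typo for `hashingTF`) on line 37 of Tokenizer.scala; that "not found: value hasingTF" error would persist even after the dependency fix until the identifier is spelled consistently.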