Using Stanford NLP in Spark: error "Class java.util.function.Function not found - continuing with a stub."

Asked: 2016-02-10 06:26:44

Tags: java scala apache-spark nlp

I need to do some text preprocessing in Spark 1.6. I took the code from the answer to Simplest method for text lemmatization in Scala and Spark (which needs import java.util.Properties), but when running sbt compile and assembly I got the following error:

[warn] Class java.util.function.Function not found - continuing with a stub.
[warn] Class java.util.function.Function not found - continuing with a stub.
[warn] Class java.util.function.Function not found - continuing with a stub.
[error] Class java.util.function.Function not found - continuing with a stub.
[error] Class java.util.function.Function not found - continuing with a stub.
[warn] four warnings found
[error] two errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 52 s, completed Feb 10, 2016 2:11:12 PM

The code is as follows:

 // ref https://stackoverflow.com/questions/30222559/simplest-method-for-text-lemmatization-in-scala-and-spark?rq=1

 def plainTextToLemmas(text: String): Seq[String] = {

   import java.util.Properties

   import edu.stanford.nlp.ling.CoreAnnotations._
   import edu.stanford.nlp.pipeline._

   import scala.collection.JavaConversions._
   import scala.collection.mutable.ArrayBuffer
   // val stopWords = Set("stopWord")

   // Build a CoreNLP pipeline: tokenize, split sentences, POS-tag, lemmatize.
   val props = new Properties()
   props.put("annotators", "tokenize, ssplit, pos, lemma")
   val pipeline = new StanfordCoreNLP(props)

   // Annotate the text and keep lowercased lemmas longer than two characters.
   val doc = new Annotation(text)
   pipeline.annotate(doc)
   val lemmas = new ArrayBuffer[String]()
   val sentences = doc.get(classOf[SentencesAnnotation])
   for (sentence <- sentences;
        token <- sentence.get(classOf[TokensAnnotation])) {
     val lemma = token.get(classOf[LemmaAnnotation])
     if (lemma.length > 2) {
       lemmas += lemma.toLowerCase
     }
   }
   lemmas
 }
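For context, a minimal sketch of how this helper might be driven from Spark 1.6 (the app name and input path are placeholders, and it assumes plainTextToLemmas is defined on a serializable object so the closure can be shipped to executors):

 import org.apache.spark.{SparkConf, SparkContext}

 // Hypothetical driver: lemmatize every line of a text file.
 val sc = new SparkContext(new SparkConf().setAppName("lemmatize"))
 val lines = sc.textFile("hdfs:///path/to/corpus.txt") // placeholder path
 val lemmatized = lines.map(plainTextToLemmas)
 lemmatized.take(5).foreach(println)

Note that plainTextToLemmas builds a new StanfordCoreNLP pipeline on every call, which is expensive; for real workloads, mapPartitions with one pipeline per partition is the usual design.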

My sbt file is as follows:

scalaVersion := "2.11.7"

crossScalaVersions := Seq("2.10.5", "2.11.0-M8")

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.6.0" % "provided",
  "org.apache.spark" % "spark-mllib_2.10" % "1.6.0" % "provided",
  "org.apache.spark" % "spark-sql_2.10" % "1.6.0" % "provided",
  "com.github.scopt" % "scopt_2.10" % "3.3.0"
)

libraryDependencies ++= Seq(
  "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2",
  "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2" classifier "models"
  // "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2" classifier "models-chinese"
  // "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2" classifier "models-german"
  // "edu.stanford.nlp" % "stanford-corenlp" % "3.5.2" classifier "models-spanish"
  // "com.google.code.findbugs" % "jsr305" % "2.0.3"
)
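Incidentally, the stub warnings arise because stanford-corenlp 3.5.2 references java.util.function.Function, a type that only exists in Java 8, while the build is compiling against a Java 7 class library. One way to make the build fail fast on the wrong JDK is to target Java 8 explicitly; a minimal sketch (these flags are my assumption, not part of the original build file):

 // Force javac/scalac to target Java 8; under a Java 7 JDK this errors out
 // immediately instead of producing stub warnings.
 javacOptions ++= Seq("-source", "1.8", "-target", "1.8")
 scalacOptions += "-target:jvm-1.8"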
Following a suggestion I found online, I changed the Java library version from 1.7 to 1.8, but the problem persists.


1 Answer:

Answer 0 (score: 3)

Solved the problem by setting JAVA_HOME to Java 8. Previously I had only changed the project SDK to Java 8 while JAVA_HOME still pointed at Java 7, so the sbt compile kept running under Java 7 and failed.
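If changing the environment variable is inconvenient, sbt can also be pointed at a specific JDK from the build itself; a hedged sketch (the path is a placeholder for wherever JDK 8 is installed, and javaHome mainly affects javac and forked JVMs):

 // Placeholder path: point this at a local JDK 8 installation.
 javaHome := Some(file("/usr/lib/jvm/java-8-openjdk"))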