Exception in thread "main" java.lang.ExceptionInInitializerError while running the Spark RandomForestClassificationExample

Asked: 2018-03-10 10:05:26

Tags: scala apache-spark machine-learning playframework

I am new to MLlib and want to run the Spark MLlib examples. I created a new Play Framework project and added the following dependencies:

  

name := """EnsembleAI"""

version := "1.0-SNAPSHOT"

lazy val root = (project in file(".")).enablePlugins(PlayScala)

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  jdbc, cache, ws,
  "org.scalatestplus.play" %% "scalatestplus-play" % "1.5.1" % Test,
  "org.apache.spark" %% "spark-core" % "2.3.0",
  "org.apache.spark" %% "spark-sql" % "2.3.0",
  "org.apache.spark" %% "spark-mllib" % "2.3.0"
)

When I run the following example:

package lib

import org.apache.spark.{SparkConf, SparkContext}
// $example on$
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.mllib.util.MLUtils
// $example off$

object RandomForestClassificationExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[1]").setAppName("RandomForestClassificationExample")
    val sc = new SparkContext(conf)
    // $example on$
    // Load and parse the data file.
    val data = MLUtils.loadLibSVMFile(sc, "sample_libsvm_data.txt")
    // Split the data into training and test sets (30% held out for testing)
    val splits = data.randomSplit(Array(0.7, 0.3))
    val (trainingData, testData) = (splits(0), splits(1))

    // Train a RandomForest model.
    // Empty categoricalFeaturesInfo indicates all features are continuous.
    val numClasses = 2
    val categoricalFeaturesInfo = Map[Int, Int]()
    val numTrees = 3 // Use more in practice.
    val featureSubsetStrategy = "auto" // Let the algorithm choose.
    val impurity = "gini"
    val maxDepth = 4
    val maxBins = 32

    val model = RandomForest.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo,
      numTrees, featureSubsetStrategy, impurity, maxDepth, maxBins)

    // Evaluate model on test instances and compute test error
    val labelAndPreds = testData.map { point =>
      val prediction = model.predict(point.features)
      (point.label, prediction)
    }
    val testErr = labelAndPreds.filter(r => r._1 != r._2).count.toDouble / testData.count()
    println(s"Test Error = $testErr")
    println(s"Learned classification forest model:\n ${model.toDebugString}")

    // Save and load model
    model.save(sc, "target/tmp/myRandomForestClassificationModel")
    val sameModel = RandomForestModel.load(sc, "target/tmp/myRandomForestClassificationModel")
    // $example off$

    sc.stop()
  }
}

I get the following error:

  

18/03/10 15:01:55 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.42.97, 34415, None)
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:692)
    at org.apache.spark.SparkContext.textFile(SparkContext.scala:821)
    at org.apache.spark.mllib.util.MLUtils$.parseLibSVMFile(MLUtils.scala:101)
    at org.apache.spark.mllib.util.MLUtils$.loadLibSVMFile(MLUtils.scala:76)
    at org.apache.spark.mllib.util.MLUtils$.loadLibSVMFile(MLUtils.scala:159)
    at org.apache.spark.mllib.util.MLUtils$.loadLibSVMFile(MLUtils.scala:167)
    at lib.RandomForestClassificationExample$.main(RandomForestClassificationExample.scala:16)
    at lib.RandomForestClassificationExample.main(RandomForestClassificationExample.scala)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Incompatible Jackson version: 2.7.8
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:64)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:19)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:730)
    at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
    at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
    ... 8 more
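The key line is the "Caused by: ... Incompatible Jackson version: 2.7.8": jackson-module-scala refuses to initialize because Play pulls in a newer Jackson than the one Spark 2.3.0 ships with. A common workaround is to force a single Jackson version across the build. The snippet below is only a sketch, assuming sbt's dependencyOverrides, and the version numbers are illustrative; check your actual dependency report (for example with the sbt "evicted" task) to see which versions to pin:

// build.sbt: pin all Jackson artifacts to one mutually compatible version.
// Versions below are illustrative; align them with what Spark 2.3.0 actually requires.
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.6.7"
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.6.7"
dependencyOverrides += "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.7.1"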

Here is the sample_libsvm_data.txt.
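For reference, MLUtils.loadLibSVMFile parses each line of that file in the sparse LIBSVM text format: a numeric label followed by space-separated, 1-based index:value pairs. The two lines below only illustrate the format and are not the actual contents of sample_libsvm_data.txt:

0 1:1.2 3:5.1 10:2.3
1 2:1.5 7:0.4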

Note: the example above is taken straight from the Spark examples.

Please point out where I am going wrong.

0 Answers:

No answers yet.