Forward reference extends over definition of value ssc

Time: 2016-09-28 05:35:55

Tags: scala apache-spark

I'm writing a Spark application in Scala, and I need to catch any exception thrown while creating the StreamingContext. But I get the compile error "forward reference extends over definition of value ssc". I'm using the following code:

def main(args: Array[String]): Unit = {

  // minRememberDuration is set so the fileStream also reads pre-existing files in the directory
  // Kryo serialization of the relevant classes needs to be enabled for the fileStream
  //System.setProperty("hadoop.home.dir", "C:\\hadoop")

  if (stopActiveContext) {
    StreamingContext.getActive.foreach {
      _.stop(stopSparkContext = false)
    }
  }

  val ssc: StreamingContext =
    try {
      StreamingContext.getActiveOrCreate("/mapr/cellos-mapr/user/mbazarganigilani/checkpoints", creatingFunc) //, hadoopConfiguration, false)
      //val ssc = StreamingContext.getOrCreate("/mapr/cellos-mapr/user/mbazarganigilani/checkpoints", creatingFunc)
    } catch {
      case ex: Exception =>
        ssc.stop(true, true) // <- the compiler flags this reference to ssc
        null
    }

  //val ssc = StreamingContext.getActiveOrCreate("s3n://probecheckpoints/checkpoints", creatingFunc, SparkHadoopUtil.get.conf.set("dsf","dfsdf"),)

  //val ssc = StreamingContext.get

  if (newContextCreated) {
    println("New context created from currently defined creating function")
  } else {
    println("Existing context running or recovered from checkpoint, may not be running currently defined creating function")
  }

  // Start the streaming context in the background.
  ssc.start()

  // Block for a while so the background streaming job has time to start (the timeout is in milliseconds).
  ssc.awaitTerminationOrTimeout(batchIntervalSeconds * 2 * 1000 * 1000)
}
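For context, this error is scalac rejecting a local val whose initializer refers to the val itself: inside a block, a name may not be referenced before its definition is complete, and the `ssc.stop(true, true)` in the catch block is exactly such a reference. A minimal, Spark-free reproduction (with hypothetical names) of the same error:

    def demo(): Unit = {
      // does not compile: "forward reference extends over definition of value x"
      val x: Int =
        try 1
        catch { case _: Exception => x } // x is referenced inside its own initializer
      println(x)
    }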


Is my approach correct, or is there a better way to handle this in Spark?
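One way to avoid the forward reference, sketched below with creatingFunc and the checkpoint path taken from the question: stop whatever context is currently active via StreamingContext.getActive (which returns an Option[StreamingContext]) instead of the not-yet-defined ssc, and rethrow rather than binding ssc to null:

    import org.apache.spark.streaming.StreamingContext

    val checkpointDir = "/mapr/cellos-mapr/user/mbazarganigilani/checkpoints"

    val ssc: StreamingContext =
      try {
        StreamingContext.getActiveOrCreate(checkpointDir, creatingFunc)
      } catch {
        case ex: Exception =>
          // Stop the active context, if any, without referring to ssc itself.
          StreamingContext.getActive.foreach(_.stop(stopSparkContext = true, stopGracefully = true))
          throw ex // propagate instead of returning null
      }

Rethrowing also keeps ssc from ever being null, so the later ssc.start() cannot hit a NullPointerException.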

0 Answers:

No answers yet