Persisting / sharing an RDD between jobs in Spark Job Server

Date: 2016-02-26 21:50:17

Tags: scala apache-spark spark-jobserver

I would like to persist an RDD produced by one Spark job so that all subsequent jobs submitted through Spark Job Server can use it. Here is what I have tried:

Job 1:

package spark.jobserver

import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark._
import org.apache.spark.SparkContext._
import scala.util.Try

object FirstJob extends SparkJob with NamedRddSupport {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[4]").setAppName("FirstJob")
    val sc = new SparkContext(conf)
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is " + results)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {

    // the below variable is to be accessed by other jobs:
    val to_be_persisted : org.apache.spark.rdd.RDD[String] = sc.parallelize(Seq("some text"))

    this.namedRdds.update("resultsRDD", to_be_persisted)
    return to_be_persisted
  }
}

Job 2:

package spark.jobserver

import com.typesafe.config.{Config, ConfigFactory}
import org.apache.spark._
import org.apache.spark.SparkContext._
import scala.util.Try


object NextJob extends SparkJob with NamedRddSupport {
  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[4]").setAppName("NextJob")
    val sc = new SparkContext(conf)
    val config = ConfigFactory.parseString("")
    val results = runJob(sc, config)
    println("Result is " + results)
  }

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {

    val rdd = this.namedRdds.get[(String, String)]("resultsRDD").get
    rdd
  }
}

The error I get is:

{
  "status": "ERROR",
  "result": {
    "message": "None.get",
    "errorClass": "java.util.NoSuchElementException",
    "stack": ["scala.None$.get(Option.scala:313)", "scala.None$.get(Option.scala:311)", "spark.jobserver.NextJob$.runJob(NextJob.scala:30)", "spark.jobserver.NextJob$.runJob(NextJob.scala:16)", "spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:278)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)", "scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)", "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)", "java.lang.Thread.run(Thread.java:745)"]
  }
}

Please modify the code above so that to_be_persisted becomes accessible. Thanks.

EDIT

After compiling and packaging the Scala sources, the Spark context was created with:

curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=4&mem-per-node=512m'

FirstJob and NextJob are invoked using:
curl -d "" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.FirstJob&context=test-context&sync=true'

curl -d "" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.NextJob&context=test-context&sync=true'
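
(For completeness: the appName=test in these calls refers to a jar that must already have been uploaded to the job server under that name. With the standard spark-jobserver REST API, that upload step looks roughly like the following; the jar file name here is only illustrative.)

# upload the packaged job jar under the app name "test" (jar name is illustrative)
curl --data-binary @job-server-tests.jar localhost:8090/jars/test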

1 Answer:

Answer 0 (score: 5)

There seem to be two problems here:

  1. If you are using the latest spark-jobserver version (0.6.2-SNAPSHOT), there is a known bug about named objects not working properly, which seems to fit your description: https://github.com/spark-jobserver/spark-jobserver/issues/386

  2. You also have a small type mismatch: in FirstJob you persist an RDD[String], while in NextJob you try to retrieve an RDD[(String, String)]. The line should read val rdd = this.namedRdds.get[String]("resultsRDD").get (see the corrected snippet below).
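
For reference, a minimal sketch of NextJob with that type parameter corrected; the main method from the question is omitted, and the package and job API are unchanged from the question's code:

package spark.jobserver

import com.typesafe.config.Config
import org.apache.spark._

object NextJob extends SparkJob with NamedRddSupport {

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // look up the RDD that FirstJob registered under "resultsRDD" in the same context;
    // the element type must match what was stored: RDD[String], not RDD[(String, String)]
    val rdd = this.namedRdds.get[String]("resultsRDD").get
    rdd
  }
}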

I have tried your code with spark-jobserver version 0.6.0, and with the small fix above it runs as expected.