Multiple Spark contexts

Date: 2015-09-28 16:07:07

Tags: scala apache-spark

In short:

EC2 cluster: 1 master, 3 slaves

Spark version: 1.3.1

Using the spark.driver.allowMultipleContexts option, I would like to have one local context (master only) and one cluster context (master and slaves).

I get this stack trace (line 29 is where I call the object that initializes the second SparkContext):

fr.entry.Main.main(Main.scala)
   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1812)
   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1808)
   at scala.Option.foreach(Option.scala:236)
   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1808)
   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1795)
   at scala.Option.foreach(Option.scala:236)
   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1795)
   at org.apache.spark.SparkContext$.setActiveContext(SparkContext.scala:1847)
   at org.apache.spark.SparkContext.<init>(SparkContext.scala:1754)
   at fr.entry.cluster$.<init>(Main.scala:79)
   at fr.entry.cluster$.<clinit>(Main.scala)
   at fr.entry.Main$delayedInit$body.apply(Main.scala:29)
   at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
   at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
   at scala.App$$anonfun$main$1.apply(App.scala:71)
   at scala.App$$anonfun$main$1.apply(App.scala:71)
   at scala.collection.immutable.List.foreach(List.scala:318)
   at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
   at scala.App$class.main(App.scala:71)
   at fr.entry.Main$.main(Main.scala:14)
   at fr.entry.Main.main(Main.scala)
15/09/28 15:33:30 INFO AppClient$ClientActor: Executor updated: app-20150928153330-0036/2 is now LOADING
15/09/28 15:33:30 INFO AppClient$ClientActor: Executor updated: app-20150928153330-0036/0 is now RUNNING
15/09/28 15:33:30 INFO AppClient$ClientActor: Executor updated: app-20150928153330-0036/1 is now RUNNING
15/09/28 15:33:30 INFO SparkContext: Starting job: sum at Main.scala:29
15/09/28 15:33:30 INFO DAGScheduler: Got job 0 (sum at Main.scala:29) with 2 output partitions (allowLocal=false)
15/09/28 15:33:30 INFO DAGScheduler: Final stage: Stage 0(sum at Main.scala:29)
15/09/28 15:33:30 INFO DAGScheduler: Parents of final stage: List()
15/09/28 15:33:30 INFO DAGScheduler: Missing parents: List()
15/09/28 15:33:30 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[2] at numericRDDToDoubleRDDFunctions at Main.scala:29), which has no missing parents
15/09/28 15:33:30 INFO MemoryStore: ensureFreeSpace(2264) called with curMem=0, maxMem=55566516879
15/09/28 15:33:30 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.2 KB, free 51.8 GB)
15/09/28 15:33:30 INFO MemoryStore: ensureFreeSpace(1656) called with curMem=2264, maxMem=55566516879
15/09/28 15:33:30 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1656.0 B, free 51.8 GB)
15/09/28 15:33:30 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:40476 (size: 1656.0 B, free: 51.8 GB)
15/09/28 15:33:30 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/09/28 15:33:30 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:839
15/09/28 15:33:30 INFO AppClient$ClientActor: Executor updated: app-20150928153330-0036/2 is now RUNNING
15/09/28 15:33:30 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (MapPartitionsRDD[2] at numericRDDToDoubleRDDFunctions at Main.scala:29)
15/09/28 15:33:30 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/09/28 15:33:45 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/09/28 15:34:00 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

More details:

I want to run a single program that does two things. First, I have a local SparkContext (on the master only), with which I build an RDD and run some operations. Second, I have a second SparkContext initialized with the master and the 3 slaves, which also builds an RDD and runs some operations. So in the first case I want to use the 16 cores of the master, and in the second case the 8 cores × 3 of the slaves.

A simple example:

val arr = Array(Array(1, 2, 3, 4, 5, 6, 7, 8), Array(1, 2, 3, 4, 5, 6, 7, 8))
println(local.sparkContext.makeRDD(arr).count()) 
println(cluster.sparkContext.makeRDD(arr).map(l => l.sum).sum)

My two SparkContexts:

object local {

  val project = "test"
  val version = "1.0"

  val sc = new SparkConf()
    .setMaster("local[16]")
    .setAppName("Local")
    .set("spark.local.dir", "/mnt")
    .setJars(Seq("target/scala-2.10/" + project + "-assembly-" + version + ".jar", "target/scala-2.10/" + project + "_2.10-" + version + "-tests.jar"))
    .setSparkHome("/root/spark")
    .set("spark.driver.allowMultipleContexts", "true")
    .set("spark.executor.memory", "45g")

  val sparkContext = new SparkContext(sc)
}

object cluster {

  val project = "test"
  val version = "1.0"

  val sc = new SparkConf()
   .setMaster(masterURL)  // ec2-XX-XXX-XXX-XX.compute-1.amazonaws.com
   .setAppName("Cluster")
   .set("spark.local.dir", "/mnt")
   .setJars(Seq("target/scala-2.10/" + project + "-assembly-" + version + ".jar", "target/scala-2.10/" + project + "_2.10-" + version + "-tests.jar") ++ otherJars)
   .setSparkHome("/root/spark")
   .set("spark.driver.allowMultipleContexts", "true")
   .set("spark.executor.memory", "35g")

  val sparkContext = new SparkContext(sc)
}

How can I fix this?

4 answers:

Answer 0 (score: 11)

Although the configuration option spark.driver.allowMultipleContexts exists, it is misleading, because the use of multiple Spark contexts is discouraged. The option is only used for Spark internal tests and is not supposed to be used in user programs. You can get unexpected results while running more than one Spark context in a single JVM.
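If the two phases do not have to run at the same time, a common workaround is to run them one after another in the same JVM, stopping the first context before creating the second, so that only one SparkContext exists at any time. A minimal sketch (the cluster master URL below is a placeholder, not taken from the question):

import org.apache.spark.{SparkConf, SparkContext}

object SequentialContexts extends App {
  val arr = Array(Array(1, 2, 3, 4, 5, 6, 7, 8), Array(1, 2, 3, 4, 5, 6, 7, 8))

  // Phase 1: local-only work on the master's 16 cores.
  val localSc = new SparkContext(new SparkConf().setMaster("local[16]").setAppName("LocalPhase"))
  println(localSc.makeRDD(arr).count())
  localSc.stop() // release the first context before creating the second one

  // Phase 2: distributed work on the standalone cluster.
  val clusterSc = new SparkContext(new SparkConf().setMaster("spark://ec2-master:7077").setAppName("ClusterPhase"))
  println(clusterSc.makeRDD(arr).map(_.sum).sum)
  clusterSc.stop()
}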

Answer 1 (score: 1)

If coordination is required between the two programs, it would be better to make them part of a single Spark application, to take advantage of Spark's internal optimizations and to avoid unnecessary I/O.

Secondly, if the two applications do not need to coordinate in any way, you can launch two separate applications. Since you are using Amazon EC2/EMR, you can use YARN as your resource manager without significant time investment, as described here.
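For the single-application route, one context can cover both phases: small driver-side work can be done with plain Scala collections, while the heavy work runs as a normal distributed RDD job. A rough sketch, again with a placeholder master URL:

import org.apache.spark.{SparkConf, SparkContext}

object SingleApplication extends App {
  val sc = new SparkContext(new SparkConf().setMaster("spark://ec2-master:7077").setAppName("BothPhases"))

  val arr = Array(Array(1, 2, 3, 4, 5, 6, 7, 8), Array(1, 2, 3, 4, 5, 6, 7, 8))

  // "Local" phase: small data stays on the driver as an ordinary Scala collection.
  println(arr.map(_.length).sum)

  // Distributed phase: the same data processed as an RDD across the cluster.
  println(sc.makeRDD(arr).map(_.sum).sum)

  sc.stop()
}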

Answer 2 (score: 1)

If you really need to use multiple Spark contexts, you can turn on the special option [MultipleContexts] (1), but it is only used for Spark internal tests and is not supposed to be used in user programs; you will get unexpected behavior while running more than one Spark context in a single JVM [SPARK-2243] (2). However, it is possible to create different contexts in separate JVMs and to manage the contexts at the SparkConf level, so that each one is tuned for the job it will run.

There is a middleware on top of Spark, [Mist]. It manages Spark contexts and multiple JVMs, so that you can run different kinds of jobs, such as an ETL pipeline, a fast prediction job, an ad-hoc Hive query and a Spark streaming application, on the same cluster at the same time. It looks like this: Mist creates every new SparkContext in its own JVM.

(1) github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/SparkContextSuite.scala#L67

(2) issues.apache.org/jira/browse/SPARK-2243
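Without middleware, the simplest way to get a second context in its own JVM is to launch the cluster job as a separate process, for example with spark-submit, from the first program. A hedged sketch; the class name, jar path and master URL below are placeholders, not taken from the question:

import scala.sys.process._

object LaunchClusterJob extends App {
  // This JVM keeps its own local SparkContext; the cluster job gets a fresh JVM,
  // so each process holds exactly one SparkContext.
  val exitCode = Seq(
    "/root/spark/bin/spark-submit",
    "--master", "spark://ec2-master:7077",
    "--class", "fr.entry.ClusterJob",          // hypothetical main class for the cluster phase
    "target/scala-2.10/test-assembly-1.0.jar"  // assembly jar, analogous to the question's setJars
  ).!

  println(s"cluster job exited with code $exitCode")
}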

Answer 3 (score: 0)


Adding

 .set("spark.driver.allowMultipleContexts", "true")

to the SparkConf worked for me.