When I use YARN without dynamic allocation enabled, the job runs fine. I am using Spark 1.4.0.
Here is what I am trying to do:
rdd = sc.parallelize(range(1000000))
rdd.first()
Here is what I get in the logs:
15/09/08 11:36:12 INFO SparkContext: Starting job: runJob at PythonRDD.scala:366
15/09/08 11:36:12 INFO DAGScheduler: Got job 0 (runJob at PythonRDD.scala:366) with 1 output partitions (allowLocal=true)
15/09/08 11:36:12 INFO DAGScheduler: Final stage: ResultStage 0(runJob at PythonRDD.scala:366)
15/09/08 11:36:12 INFO DAGScheduler: Parents of final stage: List()
15/09/08 11:36:12 INFO DAGScheduler: Missing parents: List()
15/09/08 11:36:12 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at RDD at PythonRDD.scala:43), which has no missing parents
15/09/08 11:36:13 INFO MemoryStore: ensureFreeSpace(3560) called with curMem=0, maxMem=278302556
15/09/08 11:36:13 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.5 KB, free 265.4 MB)
15/09/08 11:36:13 INFO MemoryStore: ensureFreeSpace(2241) called with curMem=3560, maxMem=278302556
15/09/08 11:36:13 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.2 KB, free 265.4 MB)
15/09/08 11:36:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.1.5.212:50079 (size: 2.2 KB, free: 265.4 MB)
15/09/08 11:36:13 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:874
15/09/08 11:36:13 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (PythonRDD[1] at RDD at PythonRDD.scala:43)
15/09/08 11:36:13 INFO YarnScheduler: Adding task set 0.0 with 1 tasks
15/09/08 11:36:14 INFO ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 1)
15/09/08 11:36:28 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/09/08 11:36:43 WARN YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
...
Here is a screenshot of the cluster UI:
Can anyone provide a solution? Even a lead would be appreciated.
Answer 0 (score: 3)
I solved the problem. It turned out that the issue was not directly related to resource availability. To use dynamic allocation, YARN needs to run Spark's external shuffle service rather than the MapReduce shuffle. To understand dynamic allocation better, I recommend reading this.
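For illustration, here is a minimal PySpark sketch of the Spark-side settings involved. It assumes the YARN NodeManagers have already been configured to run Spark's external shuffle service (the yarn.nodemanager.aux-services and yarn.nodemanager.aux-services.spark_shuffle.class properties in yarn-site.xml, per the Spark-on-YARN documentation); the app name and executor bounds below are placeholders, not values from the original post:

from pyspark import SparkConf, SparkContext

# Spark side of dynamic allocation; the numbers here are illustrative only.
conf = (SparkConf()
        .setAppName("dynamic-allocation-test")               # hypothetical app name
        .set("spark.dynamicAllocation.enabled", "true")
        .set("spark.shuffle.service.enabled", "true")        # talk to the external shuffle service
        .set("spark.dynamicAllocation.minExecutors", "1")
        .set("spark.dynamicAllocation.maxExecutors", "10"))
sc = SparkContext(conf=conf)

rdd = sc.parallelize(range(1000000))
print(rdd.first())

If the NodeManager-side shuffle service is missing, the executors requested by the ExecutorAllocationManager never come up, which is consistent with the repeated "Initial job has not accepted any resources" warnings in the log above.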