Spark / Mesos / Tasks lost, slaves blacklisted, executors removed

Asked: 2018-06-11 12:44:01

Tags: scala apache-spark mesos spark-submit

I am running a spark-submit job built with SBT, using Scala 2.11.11 on Spark 2.2.0, against Mesos 1.4.2.

I am running into problems with tasks being lost and executors failing to register. Here are the symptoms:

MesosCoarseGrainedSchedulerBackend launches tasks until spark.cores.max is reached. For example, here it launches 6 tasks:

18/06/11 12:49:54 DEBUG MesosCoarseGrainedSchedulerBackend: Received 2 resource offers.
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Accepting offer: a6031461-f185-424d-940e-b45fb64a2aaf-O585462 with attributes: Map() mem: 423417.0 cpu: 55.5 ports: List((1025,2180), (2182,3887), (3889,5049), (5052,5507), (5509,8079), (8082,8180), (8182,8792), (8794,9177), (9179,12396), (12398,16297), (16299,16839), (16841,18310), (18312,21795), (21797,22269), (22271,32000)).  Launching 2 Mesos tasks.
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Launching Mesos task: 2 with mem: 11264.0 cpu: 20.0 ports: 
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Launching Mesos task: 0 with mem: 11264.0 cpu: 20.0 ports: 
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Accepting offer: a6031461-f185-424d-940e-b45fb64a2aaf-O585463 with attributes: Map() mem: 300665.0 cpu: 71.5 ports: List((1025,2180), (2182,2718), (2721,3887), (3889,5049), (5052,5455), (5457,8079), (8082,8180), (8182,8262), (8264,8558), (8560,8792), (8794,10231), (10233,16506), (16508,18593), (18595,32000)).  Launching 3 Mesos tasks.
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Launching Mesos task: 4 with mem: 11264.0 cpu: 20.0 ports: 
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Launching Mesos task: 3 with mem: 11264.0 cpu: 20.0 ports: 
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Launching Mesos task: 1 with mem: 11264.0 cpu: 20.0 ports: 
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Received 2 resource offers.
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Accepting offer: a6031461-f185-424d-940e-b45fb64a2aaf-O585464 with attributes: Map() mem: 423417.0 cpu: 55.5 ports: List((1025,2180), (2182,3887), (3889,5049), (5052,5507), (5509,8079), (8082,8180), (8182,8792), (8794,9177), (9179,12396), (12398,16297), (16299,16839), (16841,18310), (18312,21795), (21797,22269), (22271,32000)).  Launching 1 Mesos tasks.
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Launching Mesos task: 5 with mem: 11264.0 cpu: 20.0 ports: 
18/06/11 12:49:55 DEBUG MesosCoarseGrainedSchedulerBackend: Declining offer: a6031461-f185-424d-940e-b45fb64a2aaf-O585465 with attributes: Map() mem: 300665.0 cpu: 71.5 port: List((1025,2180), (2182,2718), (2721,3887), (3889,5049), (5052,5455), (5457,8079), (8082,8180), (8182,8262), (8264,8558), (8560,8792), (8794,10231), (10233,16506), (16508,18593), (18595,32000)) for 120 seconds  (reason: reached spark.cores.max)

Then it immediately starts losing tasks and blacklisting slaves, even though I have set spark.blacklist.enabled=false:

18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 2 is now TASK_LOST
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 0 is now TASK_LOST
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Blacklisting Mesos slave a6031461-f185-424d-940e-b45fb64a2aaf-S0 due to too many failures; is Spark installed on it?
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 4 is now TASK_LOST
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 3 is now TASK_LOST
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Blacklisting Mesos slave a6031461-f185-424d-940e-b45fb64a2aaf-S1 due to too many failures; is Spark installed on it?
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 1 is now TASK_LOST
18/06/11 12:49:55 INFO MesosCoarseGrainedSchedulerBackend: Blacklisting Mesos slave a6031461-f185-424d-940e-b45fb64a2aaf-S1 due to too many failures; is Spark installed on it?

After that, the executors are removed, reported as non-existent:

18/06/11 12:49:56 DEBUG MesosCoarseGrainedSchedulerBackend: Received 2 resource offers.
18/06/11 12:49:56 DEBUG CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove executor 2 with reason Executor finished with state LOST
18/06/11 12:49:56 INFO BlockManagerMaster: Removal of executor 2 requested
18/06/11 12:49:56 DEBUG MesosCoarseGrainedSchedulerBackend: Declining offer: a6031461-f185-424d-940e-b45fb64a2aaf-O585466 with attributes: Map() mem: 300665.0 cpu: 71.5 port: List((1025,2180), (2182,2718), (2721,3887), (3889,5049), (5052,5455), (5457,8079), (8082,8180), (8182,8262), (8264,8558), (8560,8792), (8794,10231), (10233,16506), (16508,18593), (18595,32000)) 
18/06/11 12:49:56 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 2
18/06/11 12:49:56 DEBUG MesosCoarseGrainedSchedulerBackend: Declining offer: a6031461-f185-424d-940e-b45fb64a2aaf-O585467 with attributes: Map() mem: 412153.0 cpu: 35.5 port: List((1025,2180), (2182,3887), (3889,5049), (5052,5507), (5509,8079), (8082,8180), (8182,8792), (8794,9177), (9179,12396), (12398,16297), (16299,16839), (16841,18310), (18312,21795), (21797,22269), (22271,32000)) 
18/06/11 12:49:56 DEBUG CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove executor 0 with reason Executor finished with state LOST
18/06/11 12:49:56 INFO BlockManagerMaster: Removal of executor 0 requested
18/06/11 12:49:56 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 0
18/06/11 12:49:56 DEBUG CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove executor 4 with reason Executor finished with state LOST
18/06/11 12:49:59 INFO BlockManagerMaster: Removal of executor 4 requested
18/06/11 12:49:59 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 4
18/06/11 12:49:59 DEBUG CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove executor 3 with reason Executor finished with state LOST
18/06/11 12:49:59 INFO BlockManagerMaster: Removal of executor 3 requested
18/06/11 12:49:59 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 3
18/06/11 12:49:59 DEBUG CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove executor 1 with reason Executor finished with state LOST
18/06/11 12:49:59 INFO BlockManagerMaster: Removal of executor 1 requested
18/06/11 12:49:59 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asked to remove non-existent executor 1
18/06/11 12:49:59 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 5 is now TASK_RUNNING
18/06/11 12:49:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
18/06/11 12:49:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 0 from BlockManagerMaster.
18/06/11 12:49:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 4 from BlockManagerMaster.
18/06/11 12:49:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 from BlockManagerMaster.
18/06/11 12:49:59 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.

Note, however, that the single task 5 is not lost and executor 5 is not removed:

18/06/11 12:49:59 INFO MesosCoarseGrainedSchedulerBackend: Mesos task 5 is now TASK_RUNNING
18/06/11 12:50:01 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (SlaveIp:46884) with ID 5
18/06/11 12:50:01 INFO BlockManagerMasterEndpoint: Registering block manager SpaveIP:32840 with 5.2 GB RAM, BlockManagerId(5, SlaveIP, 32840, None)

Here is my SparkSession setup:

val spark = SparkSession.builder
  .config("spark.executor.cores", 20)
  .config("spark.executor.memory", "10g")
  .config("spark.sql.shuffle.partitions", numPartitionsShuffle)
  .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .config("spark.network.timeout", "1200s")
  .config("spark.blacklist.enabled", false)
  .config("spark.blacklist.maxFailedTaskPerExecutor", 100)
  .config("spark.dynamicAllocation.enabled", false)
  .getOrCreate()

And here is my spark-submit script:

spark-submit \
  --class MyMainClass \
  --master mesos://masterIP:7077 \
  --total-executor-cores 120 \
  --driver-memory 200g \
  --deploy-mode cluster \
  --name MyMainClass \
  --conf "spark.shuffle.service.enabled=false" \
  --conf "spark.dynamicAllocation.enabled=false" \
  --conf "spark.blacklist.enabled=false" \
  --conf "spark.blacklist.maxFailedTaskPerExecutor=100" \
  --verbose \
  myJar-assembly-0.1.0-SNAPSHOT.jar

Notes:

  • I noticed that if I wait a while and then submit the job, it runs fine. However, the problems above appear when I submit jobs in quick succession, or right after killing the previous job.
  • There are enough resources on my cluster to run these tasks.
  • I duplicate the settings in the SparkSession and in spark-submit because the precedence of .config vs --conf has never seemed entirely clear to me (a way to check the effective configuration is sketched after this list).
  • It is important for us to run in non-dynamic-allocation mode.
  • Executors are being lost.
  • I compared the debug logs with those from an old but still active legacy installation based on Spark 2.0.1 on a retired cluster. There, exactly the same code launches tasks that immediately get the TASK_RUNNING status.
  • My Google and StackOverflow searches did not turn up anything useful.
  • The spark.blacklist.maxFailedTaskPerExecutor and spark.blacklist.enabled settings do not seem to have any effect.
  • Related unanswered question: Spark on Mesos (DC/OS) loses tasks before doing anything
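
Since it is not obvious which of the duplicated settings wins, and since the spark.blacklist.* options seem to be ignored, one thing that can be done is to dump the configuration the running session actually ended up with. This is only a small diagnostic sketch against the standard Spark 2.x RuntimeConfig API; the object name and the list of keys are mine, not part of the job above:

// Diagnostic sketch: print the values the running SparkSession actually uses,
// so duplicated .config / --conf settings can be checked for precedence.
import org.apache.spark.sql.SparkSession

object ShowEffectiveConf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.getOrCreate()

    // The settings discussed in this question; adjust as needed.
    val keysOfInterest = Seq(
      "spark.blacklist.enabled",
      "spark.blacklist.maxFailedTaskPerExecutor",
      "spark.dynamicAllocation.enabled",
      "spark.executor.cores",
      "spark.executor.memory")

    keysOfInterest.foreach { key =>
      // getOption returns None when the key was not set anywhere.
      println(s"$key -> ${spark.conf.getOption(key).getOrElse("<not set>")}")
    }

    spark.stop()
  }
}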

I am completely lost here.

Questions:

  1. Do you need more information to help me diagnose this?
  2. Why does the job lose most of its tasks right after it starts? I have looked at the Mesos Task Reasons, but none of them explains it.
  3. Why does it say "Asked to remove non-existent executor"?
  4. In which direction should I be looking?
  5. Could it be related to the previous job being killed and not waiting long enough before starting the next one?

1 Answer:

Answer 0 (score: 0)

I am answering my own question:

We found that our problem was twofold.

  1. Some unidentified communication/connection problem between the master and the workers caused Mesos tasks (executors) to be lost. Nothing in the logs explained what that problem was.
  2. Every time a worker lost at least 2 Mesos tasks, it got blacklisted. In Spark 2.2 this limit of 2 is hard-coded and cannot be changed (a sketch of the behaviour follows the result list below). For more details see: Blacklist is always active for MesosCoarseGrainedSchedulerBackend

The result:

  • Sometimes the communication problem did not occur and the job executed correctly.
  • Most of the time all executors were lost right at the start of the job. With 2 workers in our cluster we could only run 3 executors at a time. At the start of the job all executors would be lost (2 on worker1 and 1 on worker2), but only worker1 would be blacklisted; the lost executors would be restarted on worker2 and would keep running without problems.
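
To make point 2 concrete, the behaviour can be pictured roughly as follows. This is only a simplified sketch of what the Mesos coarse-grained backend does in Spark 2.2 as we understand it (a per-slave failure counter with a hard-coded limit of 2), not the actual Spark source:

// Simplified sketch (an approximation, not the real Spark code): each slave gets a
// failure counter, and after 2 lost tasks the slave is blacklisted for the application.
import scala.collection.mutable

object SlaveBlacklistSketch {
  // In Spark 2.2 this limit is a constant in the Mesos backend; spark.blacklist.* does not change it.
  val MaxSlaveFailures = 2

  private val failuresBySlave = mutable.Map.empty[String, Int].withDefaultValue(0)
  private val blacklistedSlaves = mutable.Set.empty[String]

  def onTaskLost(slaveId: String): Unit = {
    failuresBySlave(slaveId) += 1
    if (failuresBySlave(slaveId) >= MaxSlaveFailures) {
      blacklistedSlaves += slaveId  // no further tasks are launched on this slave
    }
  }

  def isBlacklisted(slaveId: String): Boolean = blacklistedSlaves.contains(slaveId)
}

This matches the log above: each slave is blacklisted right after its second TASK_LOST.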

Solution:

I am not sure whether this is a general solution to the problem, but we searched somewhat blindly through the configuration options that govern the various Mesos timeout mechanisms, and we found this bug in Mesos 1.4:

Using a failoverTimeout of 0 with Mesos native scheduler client can result in infinite subscribe loop

As a test we set the SparkSession config spark.mesos.driver.failoverTimeout=1.0. This appears to have solved our problem: we no longer lose executors at the start of the job.
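
For clarity, the change amounts to one extra line in the SparkSession builder shown in the question. This is just a sketch of where we put it; the value 1.0 is simply a small non-zero timeout in seconds, not a carefully tuned number:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  // ... same settings as in the question above ...
  .config("spark.mesos.driver.failoverTimeout", "1.0")  // non-zero value avoids the Mesos failoverTimeout=0 bug
  .getOrCreate()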