getExecutorMemoryStatus().size() does not return the correct number of executors

Date: 2018-07-14 18:49:32

Tags: apache-spark pyspark slurm

In short, I need the number of executors/workers in my Spark cluster, but using sc._jsc.sc().getExecutorMemoryStatus().size() gives me 1, while there are in fact 12 executors.

In more detail: I am trying to determine the number of executors and use that number as the number of partitions I ask Spark to distribute my RDD over. I do this to exploit the parallelism, as my initial data is just a range of numbers, but every one of them gets processed in an rdd#foreach method. The processing is both memory-exhausting and computationally heavy, so I want the initial range of numbers to reside in as many partitions as there are executors, to allow all executors to process chunks of it simultaneously.

Reading the comments on this question, and seeing the documentation for getExecutorMemoryStatus in Scala, the suggested command sc._jsc.sc().getExecutorMemoryStatus().size() seemed reasonable. But for some reason I get an answer of 1, no matter how many executors actually exist (12 in my last run).

Am I doing something wrong there? Am I calling the wrong method? In the wrong way?

I am running on a standalone Spark cluster that is started anew for each run of the application.

Here is a minimal example of the problem:

from pyspark import SparkConf, SparkContext
import datetime


def print_debug(msg):
    dbg_identifier = 'dbg_et '
    print(dbg_identifier + str(datetime.datetime.now()) + ':  ' + msg)


print_debug('*****************before configuring sparkContext')
conf = SparkConf().setAppName("reproducing_bug_not_all_executors_working")
sc = SparkContext(conf=conf)
print_debug('*****************after configuring sparkContext')


def main():
    executors_num = sc._jsc.sc().getExecutorMemoryStatus().size()
    list_rdd = sc.parallelize([1, 2, 3, 4, 5], executors_num)
    print_debug('line before loop_a_lot. Number of partitions created={0}, '
                'while number of executors is {1}'
                .format(list_rdd.getNumPartitions(), executors_num))
    list_rdd.foreach(loop_a_lot)
    print_debug('line after loop_a_lot')


def loop_a_lot(x):
    y = x
    print_debug('started working on item %d at ' % x + str(datetime.datetime.now()))
    for i in range(100000000):
        y = y*y/6+5
    print_debug('--------------------finished working on item %d at ' % x + str(datetime.datetime.now())
                + '\nwith a result: %.3f' % y)

if __name__ == "__main__":
    main()

And to show the problem - the last time I ran it, this is what I got in the driver's output (pasting only the relevant parts, with placeholders instead of the real ips and ports):

$> grep -E 'dbg_et|Worker:54 - Starting Spark worker' slurm-<job-num>.out
2018-07-14 20:48:26 INFO  Worker:54 - Starting Spark worker <ip1>:<port1> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:26 INFO  Worker:54 - Starting Spark worker <ip1>:<port2> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip2>:<port3> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip2>:<port4> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip3>:<port5> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip3>:<port6> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip4>:<port7> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip4>:<port8> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip5>:<port9> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip5>:<port10> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip6>:<port11> with 10 cores, 124.9 GB RAM
2018-07-14 20:48:29 INFO  Worker:54 - Starting Spark worker <ip6>:<port12> with 10 cores, 124.9 GB RAM
dbg_et 2018-07-14 20:48:37.044785:  *****************before configuring sparkContext
dbg_et 2018-07-14 20:48:38.708370:  *****************after configuring sparkContext
dbg_et 2018-07-14 20:48:39.046295:  line before loop_a_lot. Number of partitions created=1, while number of executors is 1
dbg_et 2018-07-14 20:50:11.181091:  line after loop_a_lot

And in the worker_dir Spark created a new directory for the run, with 12 subdirectories in it - only one of which (this time it was directory 5) has a copy of the script and a non-empty output. That makes sense given the erroneous reading of 1 executor, which made Spark create the rdd in one partition only. Here is the full output of that worker (this output is actually the stderr - I have no idea why it's not in the stdout where it should be):

dbg_et 2018-07-14 20:48:41.806805:  started working on item 1 at 2018-07-14 20:48:41.806733
dbg_et 2018-07-14 20:48:59.710258:  --------------------finished working on item 1 at 2018-07-14 20:48:59.710198
with a result: inf
dbg_et 2018-07-14 20:48:59.710330:  started working on item 2 at 2018-07-14 20:48:59.710315
dbg_et 2018-07-14 20:49:17.367545:  --------------------finished working on item 2 at 2018-07-14 20:49:17.367483
with a result: inf
dbg_et 2018-07-14 20:49:17.367613:  started working on item 3 at 2018-07-14 20:49:17.367592
dbg_et 2018-07-14 20:49:35.382441:  --------------------finished working on item 3 at 2018-07-14 20:49:35.381597
with a result: inf
dbg_et 2018-07-14 20:49:35.382517:  started working on item 4 at 2018-07-14 20:49:35.382501
dbg_et 2018-07-14 20:49:53.227696:  --------------------finished working on item 4 at 2018-07-14 20:49:53.227615
with a result: inf
dbg_et 2018-07-14 20:49:53.227771:  started working on item 5 at 2018-07-14 20:49:53.227755
dbg_et 2018-07-14 20:50:11.128510:  --------------------finished working on item 5 at 2018-07-14 20:50:11.128452
with a result: inf

Can anyone help me understand what causes the problem? Any ideas? Could it be because of Slurm? (as you can see by the way I grep the driver's output file - I am running Spark on top of Slurm, as the cluster I have access to is managed by it)

1 Answer:

Answer 0 (score: 0)

Short fix: allow time (e.g. add a sleep command) before you use defaultParallelism or _jsc.sc().getExecutorMemoryStatus(), if you use either of them at the beginning of the application's execution.
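For example, rather than a fixed sleep, one can poll until the expected number of executors has registered. A minimal sketch of that idea (wait_for_executors is a hypothetical helper, and the expected count of 12 and the timeout are assumptions for this particular cluster):

import time

def wait_for_executors(sc, expected, timeout_sec=60, poll_sec=1):
    # getExecutorMemoryStatus() counts the driver as well, hence the '- 1'
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        seen = sc._jsc.sc().getExecutorMemoryStatus().size() - 1
        if seen >= expected:
            return seen
        time.sleep(poll_sec)
    return sc._jsc.sc().getExecutorMemoryStatus().size() - 1

executors_num = wait_for_executors(sc, expected=12)  # the 12 executors of this run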

Explanation: there seems to be a short period of time at startup when there is only one executor (I believe the single executor is the driver, which in some contexts is treated as an executor). That is why using sc._jsc.sc().getExecutorMemoryStatus() at the top of the main function yielded the wrong number for me. The same happened with defaultParallelism(1).

My suspicion is that the driver starts working with itself as a worker before all the workers have connected to it. It agrees with the fact that submitting the following code to spark-submit with --total-executor-cores 12

import time

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("app_name")
sc = SparkContext(conf=conf)
log4jLogger = sc._jvm.org.apache.log4j
log = log4jLogger.LogManager.getLogger("dbg_et")

log.warn('defaultParallelism={0}, and size of executorMemoryStatus={1}'
          .format(sc.defaultParallelism,
                  sc._jsc.sc().getExecutorMemoryStatus().size()))
time.sleep(15)
log.warn('After 15 seconds: defaultParallelism={0}, and size of executorMemoryStatus={1}'
          .format(sc.defaultParallelism, 
                  sc._jsc.sc().getExecutorMemoryStatus().size()))
rdd_collected = (sc.parallelize([1, 2, 3, 4, 5] * 200,
                                sc.defaultParallelism * 3)
                 # each x becomes the 4-tuple (x, x*x, x, x*x), then x + x*x
                 .map(lambda x: (x, x*x) * 2)
                 .map(lambda x: x[2] + x[1])
                 )
log.warn('Made rdd with {0} partitioned. About to collect.'
          .format(rdd_collected.getNumPartitions()))
rdd_collected.collect()
log.warn('And after rdd operations: defaultParallelism={0}, and size of executorMemoryStatus={1}'
          .format(sc.defaultParallelism,
                  sc._jsc.sc().getExecutorMemoryStatus().size()))

gave me the output below:

> tail -n 4 slurm-<job number>.out
18/09/26 13:23:52 WARN dbg_et: defaultParallelism=2, and size of executorMemoryStatus=1
18/09/26 13:24:07 WARN dbg_et: After 15 seconds: defaultParallelism=12, and size of executorMemoryStatus=13
18/09/26 13:24:07 WARN dbg_et: Made rdd with 36 partitioned. About to collect.
18/09/26 13:24:11 WARN dbg_et: And after rdd operations: defaultParallelism=12, and size of executorMemoryStatus=13

And checking the time at which the worker directories were created, I saw that it was right after the correct values of defaultParallelism and getExecutorMemoryStatus().size() were recorded(2). Importantly, that time was quite a while (~10 seconds) after the wrong values were recorded for these two parameters (compare the time of the line with "defaultParallelism=2" above with the creation times of the directories below):

 > ls -l --time-style=full-iso spark/worker_dir/app-20180926132351-0000/
 <permission user blah> 2018-09-26 13:24:08.909960000 +0300 0/
 <permission user blah> 2018-09-26 13:24:08.665098000 +0300 1/
 <permission user blah> 2018-09-26 13:24:08.912871000 +0300 10/
 <permission user blah> 2018-09-26 13:24:08.769355000 +0300 11/
 <permission user blah> 2018-09-26 13:24:08.931957000 +0300 2/
 <permission user blah> 2018-09-26 13:24:09.019684000 +0300 3/
 <permission user blah> 2018-09-26 13:24:09.138645000 +0300 4/
 <permission user blah> 2018-09-26 13:24:08.757164000 +0300 5/
 <permission user blah> 2018-09-26 13:24:08.996918000 +0300 6/
 <permission user blah> 2018-09-26 13:24:08.640369000 +0300 7/
 <permission user blah> 2018-09-26 13:24:08.846769000 +0300 8/
 <permission user blah> 2018-09-26 13:24:09.152162000 +0300 9/

(1) Before starting to use getExecutorMemoryStatus() I tried using defaultParallelism, but it kept giving me the number 2. Now I understand that it was for the same reason: running on a standalone cluster, if the driver sees only 1 executor, then defaultParallelism = 2, as can be seen in the documentation for spark.default.parallelism.

(2) I am not sure how correct the values are before the directories are created - but my assumption is that the executors' startup order has them connecting to the driver before creating their directories.
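
One more side note, following footnote (1): if the desired parallelism is known up front, the unreliable early read of defaultParallelism can be sidestepped by setting spark.default.parallelism explicitly. A minimal sketch (the value 120 is only an assumed target for this 12-executor, 10-cores-each cluster):

from pyspark import SparkConf, SparkContext

# When spark.default.parallelism is set explicitly, the standalone scheduler
# returns it as-is instead of computing max(total executor cores, 2), so even
# an early read of sc.defaultParallelism gives the intended value.
conf = (SparkConf()
        .setAppName("app_name")
        .set("spark.default.parallelism", "120"))
sc = SparkContext(conf=conf)

print(sc.defaultParallelism)  # 120, regardless of how many workers have connected

Note that this does not change what getExecutorMemoryStatus() reports - that map only fills up as executors actually connect.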