Suppose I'm running a pyspark shell against a Mesos cluster, and I only want it to take up 12 CPU cores, so I launch it like this:
uu@r4:~$ pyspark --master mesos://e3.test:5050 --total-executor-cores 12
Then the usual startup output follows:
Python 2.7.13 |Anaconda 2.5.0 (64-bit)| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/01/31 08:16:31 INFO SparkContext: Running Spark version 1.6.2
17/01/31 08:16:31 INFO SecurityManager: Changing view acls to: uu
17/01/31 08:16:31 INFO SecurityManager: Changing modify acls to: uu
17/01/31 08:16:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(uu); users with modify permissions: Set(uu)
17/01/31 08:16:31 INFO Utils: Successfully started service 'sparkDriver' on port 53336.
17/01/31 08:16:31 INFO Slf4jLogger: Slf4jLogger started
17/01/31 08:16:32 INFO Remoting: Starting remoting
17/01/31 08:16:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@r4.test:59860]
17/01/31 08:16:32 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 59860.
17/01/31 08:16:32 INFO SparkEnv: Registering MapOutputTracker
17/01/31 08:16:32 INFO SparkEnv: Registering BlockManagerMaster
17/01/31 08:16:32 INFO DiskBlockManager: Created local directory at /var/tmp/spark/blockmgr-6b16ff11-b0bc-4a71-82f5-c69a363c8c1a
17/01/31 08:16:32 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
17/01/31 08:16:32 INFO SparkEnv: Registering OutputCommitCoordinator
17/01/31 08:16:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/01/31 08:16:32 INFO SparkUI: Started SparkUI at http://r4.test:4040
I0131 08:16:32.582038 24965 sched.cpp:226] Version: 1.1.0
I0131 08:16:32.586931 24958 sched.cpp:330] New master detected at master@192.168.0.15:5050
I0131 08:16:32.587162 24958 sched.cpp:341] No credentials provided. Attempting to register without authentication
I0131 08:16:32.596922 24956 sched.cpp:743] Framework registered with 075ef8d0-de21-472d-8198-80805006b93d-0051
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Registered as framework ID 075ef8d0-de21-472d-8198-80805006b93d-0051
17/01/31 08:16:32 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 51135.
17/01/31 08:16:32 INFO NettyBlockTransferService: Server created on 51135
17/01/31 08:16:32 INFO BlockManagerMaster: Trying to register BlockManager
17/01/31 08:16:32 INFO BlockManagerMasterEndpoint: Registering block manager r4.test:51135 with 511.1 MB RAM, BlockManagerId(driver, r4.test, 51135)
17/01/31 08:16:32 INFO BlockManagerMaster: Registered BlockManager
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/01/31 08:16:32 INFO CoarseMesosSchedulerBackend: Mesos task 0 is now TASK_RUNNING
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/
Using Python version 2.7.13 (default, Dec 20 2016 23:09:15)
SparkContext available as sc, HiveContext available as sqlContext.
But in the end only a single executor gets registered:
>>> 17/01/31 08:16:35 INFO CoarseMesosSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (r5.test:42965) with ID 023af0f2-fc60-4d9d-a3db-301ab34764c9-S3
17/01/31 08:16:35 INFO BlockManagerMasterEndpoint: Registering block manager r5.test:33239 with 511.1 MB RAM, BlockManagerId(023af0f2-fc60-4d9d-a3db-301ab34764c9-S3, r5.test, 33239)
which means the whole Spark application is about to run on a single node. That is not the scheduling I want (mainly for data-locality reasons). I expected something closer to the Spark standalone behavior, where the --total-executor-cores get spread more or less evenly across the cluster.
Is there any way to achieve this? The remaining options that mention executor/core counts don't seem to have any effect (they appear to be relevant only to standalone and YARN setups).
Why does Spark on Mesos use this placement strategy, filling nodes one by one instead of spreading the work out?
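To make the packing visible from the shell, here is a quick check in plain PySpark (a sketch; the partition count of 100 is arbitrary, just enough to keep every core busy) that records which hosts tasks actually land on:

import socket

# Fan tasks out across whatever executors registered and collect the
# distinct hostnames they ran on; with everything packed onto one agent
# this prints a single host (r5.test in the session above).
hosts = sc.parallelize(range(100), 100) \
          .map(lambda _: socket.gethostname()) \
          .distinct() \
          .collect()
print(hosts)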
UPD: the conf entries mentioned in the docs don't work either:
pyspark --master mesos://e3.test:5050 --conf spark.executor.cores=2 --conf spark.cores.max=12
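For completeness, the same values can also go into conf/spark-defaults.conf instead of being passed with --conf (this is the standard Spark configuration mechanism; the numbers are the ones from the command above):

spark.executor.cores   2
spark.cores.max        12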
Answer 0 (score: 0)
Spark version 1.6.2 is the problem. In later releases (Spark 2.x), the Mesos coarse-grained backend honors spark.executor.cores, which caps the number of cores each executor may take, so the total allowed by spark.cores.max can be split across several executors on different agents.
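A sketch of what the launch could look like after upgrading to Spark 2.x (same master URL and core counts as in the question; whether the executors actually land on different agents still depends on the offers Mesos makes):

pyspark --master mesos://e3.test:5050 --conf spark.cores.max=12 --conf spark.executor.cores=2

With each executor capped at 2 cores, up to 6 executors can register, and offers from different Mesos agents can each host one instead of a single executor on r5.test swallowing all 12 cores.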