I have a 9-node Hadoop cluster running MapR, with the YARN roles shown below. Whenever I submit a Spark job through pyspark, I see the following:
% yarn node -list
17/04/26 23:40:26 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to hadoop-n1/XXX.XXX.XX.XX:8032
Total Nodes:8
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
hadoop-d8:52118 RUNNING hadoop-d8:8042 4
hadoop-d5:39108 RUNNING hadoop-d5:8042 0
hadoop-d2:42471 RUNNING hadoop-d2:8042 0
hadoop-d4:36191 RUNNING hadoop-d4:8042 0
hadoop-d3:53476 RUNNING hadoop-d3:8042 0
hadoop-d6:52497 RUNNING hadoop-d6:8042 4
hadoop-d1:59887 RUNNING hadoop-d1:8042 0
hadoop-d7:51878 RUNNING hadoop-d7:8042 4
So d6, d7, and d8 are always used (and always with 4 containers each); the other nodes are never used. I have set spark.dynamicAllocation.enabled to true (launching the shell as sketched below), which made no difference. I have also tried restarting all of the YARN processes, again with no luck. How do I get my jobs to run on all of the nodes?
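For reference, the shell is launched roughly like this (a sketch rather than my exact invocation; the min/max executor bounds are placeholder values, and spark.shuffle.service.enabled is required for dynamic allocation to do anything):

% pyspark --master yarn \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=1 \
    --conf spark.dynamicAllocation.maxExecutors=32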
I am running Hadoop 2.7.0, Spark 2.0.1, and MapR 5.2 on Ubuntu 14.04. The YARN resource calculator is set to DiskBasedResourceCalculator, which takes memory, CPU, and disk into account.
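The calculator is configured through the yarn.scheduler.capacity.resource-calculator property in capacity-scheduler.xml; I checked it like this (the path below is where my MapR install keeps that file, so adjust as needed):

% grep -A1 resource-calculator /opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/capacity-scheduler.xml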
Edit: per this answer suggested in the comments, I changed to the DominantResourceCalculator and set spark.executor.instances to 0, relaunching as shown below. Still no change.
If I look at one of the nodes that is being used, I see:
% yarn node -status hadoop-d7:35731
17/04/27 16:01:19 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to hadoop-n1/
Node Report :
Node-Id : hadoop-d7:35731
Rack : /default-rack
Node-State : RUNNING
Node-Http-Address : hadoop-d7:8042
Last-Health-Update : Thu 27/Apr/17 03:59:29:475EDT
Health-Report :
Containers : 4
Memory-Used : 32768MB
Memory-Capacity : 97988MB
CPU-Used : 20 vcores
CPU-Capacity : 20 vcores
Node-Labels :
while one of the unused nodes shows:
% yarn node -status hadoop-d2:37314
17/04/27 16:01:05 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to hadoop-n1/
Node Report :
Node-Id : hadoop-d2:37314
Rack : /default-rack
Node-State : RUNNING
Node-Http-Address : hadoop-d2:8042
Last-Health-Update : Thu 27/Apr/17 03:59:32:255EDT
Health-Report :
Containers : 0
Memory-Used : 0MB
Memory-Capacity : 70189MB
CPU-Used : 0 vcores
CPU-Capacity : 4 vcores
Node-Labels :
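For completeness, this is the loop I used to pull those numbers from every node at once (the awk field position assumes the yarn node -list format shown above):

% yarn node -list | awk '/RUNNING/ {print $1}' | while read n; do
    yarn node -status "$n" | egrep 'Node-Id|Containers|Memory-|CPU-'
  done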