I'm using 40 r4.2xlarge slaves and one master of the same instance type. An r4.2xlarge has 8 vCPUs and 61 GB of memory.
The Spark settings I pass include spark.executor.instances=280, among other options.
When I observe a job running on this cluster in Ganglia, overall CPU usage is only around 30%, and the resource manager's "Aggregated Metrics by Executor" table shows only two executors per slave node.
Why does this cluster run only two executors per slave node even though spark.executor.instances is set to 280?
---- UPDATE ----
I found the yarn-site.xml under /etc/hadoop/conf.empty.
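For reference, the entries in that file that govern container sizing look roughly like the sketch below. The values here are only assumptions on my part (in the ballpark of what a managed Hadoop distribution typically sets for an r4.2xlarge), not copied from my actual cluster:

  <!-- Illustrative excerpt of yarn-site.xml; values are assumed, not taken from the real cluster -->
  <property>
    <!-- Total memory (MB) this node's NodeManager may hand out to containers -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>54272</value>
  </property>
  <property>
    <!-- Total vcores this node's NodeManager may hand out to containers -->
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>8</value>
  </property>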
Answer 0 (score: 1)
If you are running the job on YARN, the number of executors is not determined by that parameter alone; it also depends on several YARN configuration parameters. The likely ones are:
yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-vcores
yarn.nodemanager.resource.cpu-vcores
yarn.nodemanager.resource.memory-mb
Please check those parameters in yarn-site.xml; a sketch of how they interact follows below.
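Roughly: yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores cap the total resources one node can give to containers, while yarn.scheduler.maximum-allocation-mb and yarn.scheduler.maximum-allocation-vcores cap the size of a single container; if the requested executors do not fit, YARN simply grants fewer than spark.executor.instances. As a back-of-the-envelope example (the actual executor memory was not shown in the question): if each node offers about 54 GB to YARN and each executor container requests around 20 GB plus the default ~10% memory overhead (~22 GB per container), only two containers fit per node, i.e. 2 x 40 = 80 executors cluster-wide instead of the requested 280. A sketch of the scheduler-side limits in yarn-site.xml, again with assumed values:

  <!-- Illustrative scheduler limits; the values are assumptions for an r4.2xlarge-sized node -->
  <property>
    <!-- Largest memory (MB) a single container may request; bigger requests cannot be granted -->
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>54272</value>
  </property>
  <property>
    <!-- Largest number of vcores a single container may request -->
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>8</value>
  </property>

To fit more executors per node, shrink spark.executor.memory (and spark.executor.cores) so that more containers fit under the per-node totals, or raise the NodeManager limits if the hardware allows it.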