I have read many questions on SO about this topic and wrote a modest bash script to compute these values quickly.
The main sources I used while writing the script were:
The script contains the following:
#!/usr/bin/env bash
# Fixed values
CORES_PER_EXECUTOR=5   # --executor-cores; ~5 cores per executor for good HDFS throughput
HADOOP_DAEMONS_CORE=1  # cores reserved per node for the OS / Hadoop daemons
HADOOP_DAEMONS_RAM=1   # GB of RAM reserved per node for the OS / Hadoop daemons
# Example cluster values (gathered from a worker node)
total_nodes_in_cluster=10   # number of worker nodes in the cluster
total_cores_per_node=16     # physical cores, from `lscpu` (`Socket(s)` x `Core(s) per socket`)
total_ram_per_node=64       # GB, from `free -h`
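# A possible way to pull the per-node values automatically (an untested sketch;
# uncomment and verify against your own `nproc` / `free` output first):
#   total_cores_per_node=$(nproc)                             # logical CPUs on this node
#   total_ram_per_node=$(free -g | awk '/^Mem:/{print $2}')   # total RAM in GB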
available_cores_per_node=$((total_cores_per_node - HADOOP_DAEMONS_CORE))
available_cores_in_cluster=$((available_cores_per_node * total_nodes_in_cluster))
available_executors=$((available_cores_in_cluster / CORES_PER_EXECUTOR))
num_of_executors=$((available_executors - 1)) # leave 1 executor's worth of resources for the YARN ApplicationMaster
num_of_executors_per_node=$((available_executors / total_nodes_in_cluster))
mem_per_executor=$(( (total_ram_per_node - HADOOP_DAEMONS_RAM) / num_of_executors_per_node )) # GB per executor, after reserving RAM for the OS / Hadoop daemons
# Off-heap (memory overhead) allowance = 7% of mem_per_executor (in GB);
# e.g. 7% of 21GB is roughly 1.5GB (the integer arithmetic below rounds down)
seven_percent=$((mem_per_executor * 7 / 100))
executor_memory=$((mem_per_executor - seven_percent))
echo -e "The command will contains:\n spark-submit --class <CLASS_NAME> --num-executors ${num_of_executors} --executor-cores ${CORES_PER_EXECUTOR} --executor-memory ${executor_memory}G ...."
What I would like to know:

1. Am I handling the "off-heap overhead = 7% of `mem_per_executor` GB" part correctly? I mean, I did the math, but I don't understand the idea behind it.
2. Is the resulting `spark-submit` command correct?

Thanks!