How to pass custom SPARK_CONF_DIR to slaves in standalone mode

Asked: 2019-04-17 00:27:54

Tags: apache-spark

I am in the process of installing Spark in a shared cluster environment. We've decided to go with Spark standalone mode and are using the "start-all.sh" script included in sbin to launch the Spark workers. Because of the cluster's shared architecture, SPARK_HOME lives in a common directory that users cannot write to. We therefore create "run" directories in each user's scratch space, to which SPARK_CONF_DIR, the log directory, and the work directories can be pointed.
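
For reference, the launch-side setup looks roughly like the following (the paths are illustrative, not the cluster's actual layout):

# Per-user run directory; the real paths on the cluster will differ.
RUN_DIR="$HOME/scratch/spark-run"
mkdir -p "$RUN_DIR/conf" "$RUN_DIR/logs" "$RUN_DIR/work"

# Redirect Spark's writable locations away from the read-only $SPARK_HOME.
export SPARK_CONF_DIR="$RUN_DIR/conf"
export SPARK_LOG_DIR="$RUN_DIR/logs"
export SPARK_WORKER_DIR="$RUN_DIR/work"

# These exports are only visible on the node where start-all.sh is invoked.
"$SPARK_HOME/sbin/start-all.sh"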

The problem is that SPARK_CONF_DIR is never set on the worker nodes, so they default to $SPARK_HOME/conf, which has only the templates. What I want to do is pass through SPARK_CONF_DIR from the master node to the slave nodes. I've identified a solution, but it requires a patch to sbin/start-slaves.sh:

sbin/start-slaves.sh

46c46
< "${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; export SPARK_CONF_DIR=${SPARK_CONF_DIR} \; "$SPARK_HOME/sbin/start-slave.sh" "spark://$SPARK_MASTER_HOST:$SPARK_MASTER_PORT"
---
> "${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; "${SPARK_HOME}/sbin/start-slave.sh" "spark://$SPARK_MASTER_HOST:$SPARK_MASTER_PORT"

Are there any better solutions here that do not require a patch to the Spark source code?

One solution, of course, would be to copy and rename start-all.sh and start-slaves.sh and use those instead of sbin/start-all.sh. But is there anything more elegant?
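
For concreteness, that workaround would look something like this (the script names here are made up; my-start-slaves.sh is just a copy of start-slaves.sh carrying the patched line 46 shown above):

# Hypothetical user-local launcher, e.g. ~/scratch/spark-run/my-start-all.sh.
# Nothing under the read-only $SPARK_HOME is modified.
export SPARK_CONF_DIR="$HOME/scratch/spark-run/conf"

"$SPARK_HOME/sbin/start-master.sh"
"$HOME/scratch/spark-run/my-start-slaves.sh"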

Thank you for your time.

1 Answer:

Answer 0 (score: 0)

If you want to run in standalone mode, you can try setting SPARK_CONF_DIR in your program. Take PySpark for example:

import os
from pyspark.sql import SparkSession

# Point Spark at the custom configuration directory. This must be set before
# the SparkSession (and its backing JVM) is created, otherwise the default
# $SPARK_HOME/conf is used.
os.environ["SPARK_CONF_DIR"] = "/path/to/configs/conf1"
spark = SparkSession.builder.getOrCreate()
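
A note on the conf directory itself (my assumption, not something the answer spells out): SPARK_CONF_DIR should contain real config files, which can be created from the templates shipped under $SPARK_HOME/conf, for example:

# Populate the directory the snippet points at; Spark looks there for
# spark-defaults.conf, spark-env.sh and the log4j properties file.
mkdir -p /path/to/configs/conf1
cp "$SPARK_HOME/conf/spark-defaults.conf.template" /path/to/configs/conf1/spark-defaults.conf
cp "$SPARK_HOME/conf/spark-env.sh.template" /path/to/configs/conf1/spark-env.sh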