Adding workers to Spark Standalone CDH5.3

Asked: 2015-04-03 21:42:25

Tags: apache-spark cloudera cloudera-cdh cloudera-quickstart-vm

I am running the Cloudera CDH 5.3 QuickStart VM and I am having trouble running Spark. I followed the steps at http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_spark_configure .... and ran the wordcount example, and it worked. But when I go to the master (quickstart.cloudera:18080), it shows no workers, cores = 0, memory = 0... and when I go to (quickstart.cloudera:18081), there is one worker. My question is: how do I add workers? And what should I put in export STANDALONE_SPARK_MASTER_HOST?
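
As a quick sanity check (a hypothetical step, not from the original post): you can start a worker by hand and point it explicitly at the master URL, which must match the spark://... URL shown at the top of the master web UI on quickstart.cloudera:18080. A minimal sketch, assuming a package install under /usr/lib/spark and a master listening on port 7077:

# Hypothetical manual check: register a worker against an explicit master URL.
# The spark:// URL must match the "URL:" line on the master web UI.
/usr/lib/spark/bin/spark-class org.apache.spark.deploy.worker.Worker \
    spark://quickstart.cloudera:7077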

Here is spark-env.sh:

#Change the following to specify a real cluster's Master host
export STANDALONE_SPARK_MASTER_HOST=worker-20150402201049-10.0.2.15-7078
export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST
### Let's run everything with JVM runtime, instead of Scala
export SPARK_LAUNCH_WITH_SCALA=0
export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib
export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib
export SPARK_MASTER_WEBUI_PORT=18080
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_PORT=7078
export SPARK_WORKER_WEBUI_PORT=18081
export SPARK_WORKER_DIR=/var/run/spark/work
export SPARK_LOG_DIR=/var/log/spark
export SPARK_PID_DIR='/var/run/spark/'
if [ -n "$HADOOP_HOME" ]; then
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${HADOOP_HOME}/lib/native
fi
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/etc/hadoop/conf}
### Comment out the two lines above and uncomment the following if
### you want to run with the Scala version that is included with the package
#export SCALA_HOME=${SCALA_HOME:-/usr/lib/spark/scala}
#export PATH=$PATH:$SCALA_HOME/bin
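
If the worker fails to register with the master, its log usually says why (look for "Connecting to master" and "Registered with master" lines). A hedged check, using the SPARK_LOG_DIR set above; the exact log file name varies by install:

# Inspect the most recent worker log under the directory from SPARK_LOG_DIR:
tail -n 50 /var/log/spark/*worker*.out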

Thanks

1 answer:

Answer 0 (score: 0):

Add export STANDALONE_SPARK_MASTER_HOST=10.0.2.15 to spark-env.sh so that the master and the workers agree on the same host address.
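
A minimal sketch of the corrected configuration (assumptions: the VM's address really is 10.0.2.15, and the CDH packages installed the spark-master and spark-worker init scripts):

# In /etc/spark/conf/spark-env.sh: point master and workers at the same
# plain host address, not at a worker ID string such as
# worker-20150402201049-10.0.2.15-7078.
export STANDALONE_SPARK_MASTER_HOST=10.0.2.15
export SPARK_MASTER_IP=$STANDALONE_SPARK_MASTER_HOST

# Restart both daemons so they pick up the new address (service names
# assume the CDH package init scripts):
sudo service spark-master restart
sudo service spark-worker restart

After the restart, quickstart.cloudera:18080 should list one worker with non-zero cores and memory.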