I'm running Spark 2.2.0 on YARN and trying to submit the Python file backtest.py, with all project files zipped into prediction.zip. See the spark-submit command below.
The problem is that Spark cannot find one of my modules. What am I missing?
HADOOP_CONF_DIR="/etc/hive/conf.cloudera.hive" \
SPARK_HOME="/opt/spark/spark-2.2.0-bin-hadoop2.7" \
PYSPARK_PYTHON="/opt/anaconda/bin/python" \
PYSPARK_DRIVER_PYTHON="/opt/anaconda/bin/python" \
sudo -u hdfs \
/opt/spark/spark-2.2.0-bin-hadoop2.7/bin/spark-submit \
--master yarn \
--conf "spark.sql.shuffle.partitions=2001" \
--conf "spark.executorEnv.PYTHONHASHSEED=0" \
--deploy-mode cluster \
--py-files /home/gals/prediction.zip \
/home/gals/parent/prediction/backtesting/backtest.py
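For context, a minimal sketch of the mechanism that `--py-files` relies on: spark-submit puts each listed zip on the Python path of the driver and executors, and imports then resolve via Python's built-in zipimport. The package name `prediction` and module `util` below are hypothetical stand-ins for the contents of prediction.zip; the key point is that the package directory (with its `__init__.py`) must sit at the root of the archive, not nested under a parent folder like `parent/`.

```python
# Sketch of importing a package straight from a zip, as --py-files does.
# "prediction" and "util" are hypothetical names for illustration only.
import os
import sys
import tempfile
import zipfile

tmpdir = tempfile.mkdtemp()
zip_path = os.path.join(tmpdir, "prediction.zip")

# Build a tiny zip whose archive ROOT contains the package directory --
# this layout (prediction/__init__.py at the top level) is what
# `import prediction` requires on the executors.
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("prediction/__init__.py", "")
    zf.writestr("prediction/util.py", "ANSWER = 42\n")

# spark-submit effectively does this for every --py-files entry:
sys.path.insert(0, zip_path)

from prediction.util import ANSWER
print(ANSWER)  # -> 42
```

If the zip instead contains `parent/prediction/...`, `import prediction` fails with ModuleNotFoundError even though the archive shipped correctly, which matches the symptom described above.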