I'm running into a problem when I run spark-submit or pyspark in standalone mode, like so:
spark/bin/pyspark --master spark://<SPARK_IP>:<SPARK_PORT>
This normally creates a running Spark application in the UI that uses all of the nodes (at least it did in previous versions).
For some reason it now runs only on the master node, even though the UI shows all of the nodes connected to the master. There are no errors in the logs on the slave nodes. Does anyone know what could be going wrong? For reference, my spark-env.sh has the following configuration:
export HADOOP_CONF_DIR=/mnt/hadoop/etc/hadoop
export SPARK_PUBLIC_DNS=<PUBLIC_DNS>
export SPARK_MASTER_IP=<PRIVATE_DNS>
export SPARK_MASTER_PORT=7077
export SPARK_CLASSPATH=$SPARK_CLASSPATH:/mnt/hadoop/share/hadoop/tools/lib/*
export SPARK_JAVA_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_TMP_DIR="/mnt/persistent/hadoop"
export SPARK_MASTER_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_WORKER_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_DRIVER_MEMORY=5g
export SPARK_EXECUTOR_OPTS="-Djava.io.tmpdir=/mnt/persistent/hadoop"
export SPARK_EXECUTOR_INSTANCES=2
export SPARK_EXECUTOR_MEMORY=23g
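(Side note: SPARK_JAVA_OPTS and SPARK_CLASSPATH are deprecated, as the warnings in the log below point out. A rough sketch of the equivalent conf/spark-defaults.conf entries, reusing the same paths from the spark-env.sh above, would be something like:)

# sketch only: replaces SPARK_JAVA_OPTS / SPARK_CLASSPATH from spark-env.sh
spark.driver.extraJavaOptions    -Djava.io.tmpdir=/mnt/persistent/hadoop
spark.executor.extraJavaOptions  -Djava.io.tmpdir=/mnt/persistent/hadoop
spark.driver.extraClassPath      /mnt/hadoop/share/hadoop/tools/lib/*
spark.executor.extraClassPath    /mnt/hadoop/share/hadoop/tools/lib/*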
Here is what comes up after trying to start PySpark:
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
15/12/24 01:36:38 INFO spark.SparkContext: Running Spark version 1.5.2
15/12/24 01:36:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/24 01:36:38 WARN spark.SparkConf:
SPARK_JAVA_OPTS was detected (set to '-Djava.io.tmpdir=/mnt/persistent/hadoop').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with conf/spark-defaults.conf to set defaults for an application
- ./spark-submit with --driver-java-options to set -X options for a driver
- spark.executor.extraJavaOptions to set -X options for executors
- SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Djava.io.tmpdir=/mnt/persistent/hadoop' as a work-around.
15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Djava.io.tmpdir=/mnt/persistent/hadoop' as a work-around.
15/12/24 01:36:38 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to ':/mnt/hadoop/share/hadoop/tools/lib/*').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath
15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/mnt/hadoop/share/hadoop/tools/lib/*' as a work-around.
15/12/24 01:36:38 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/mnt/hadoop/share/hadoop/tools/lib/*' as a work-around.
15/12/24 01:36:38 INFO spark.SecurityManager: Changing view acls to: ubuntu
15/12/24 01:36:38 INFO spark.SecurityManager: Changing modify acls to: ubuntu
15/12/24 01:36:38 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/12/24 01:36:39 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/12/24 01:36:39 INFO Remoting: Starting remoting
15/12/24 01:36:40 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@xx.xx.xx.xx:34065]
15/12/24 01:36:40 INFO util.Utils: Successfully started service 'sparkDriver' on port 34065.
15/12/24 01:36:40 INFO spark.SparkEnv: Registering MapOutputTracker
15/12/24 01:36:40 INFO spark.SparkEnv: Registering BlockManagerMaster
15/12/24 01:36:40 INFO storage.DiskBlockManager: Created local directory at /mnt/persistent/hadoop/blockmgr-16d59ac7-dc2d-4cf7-ad52-91ff1035a86d
15/12/24 01:36:40 INFO storage.MemoryStore: MemoryStore started with capacity 2.6 GB
15/12/24 01:36:40 INFO spark.HttpFileServer: HTTP File server directory is /mnt/persistent/hadoop/spark-c6ea28f7-13dc-4799-aea7-0638cff35936/httpd-006916ff-7f84-4ad9-8fb5-bce471d73d5a
15/12/24 01:36:40 INFO spark.HttpServer: Starting HTTP Server
15/12/24 01:36:40 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/24 01:36:40 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:50882
15/12/24 01:36:40 INFO util.Utils: Successfully started service 'HTTP file server' on port 50882.
15/12/24 01:36:40 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/12/24 01:36:40 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/12/24 01:36:40 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/12/24 01:36:40 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/12/24 01:36:40 INFO ui.SparkUI: Started SparkUI at http://xx.xx.xx.xx:4040
15/12/24 01:36:40 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/12/24 01:36:40 INFO client.AppClient$ClientEndpoint: Connecting to master spark://xx.xx.xx.xx:7077...
15/12/24 01:36:41 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151224013641-0001
15/12/24 01:36:41 INFO client.AppClient$ClientEndpoint: Executor added: app-20151224013641-0001/0 on worker-20151224013503-xx.xx.xx.xx-40801 (xx.xx.xx.xx:40801) with 4 cores
15/12/24 01:36:41 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20151224013641-0001/0 on hostPort xx.xx.xx.xx:40801 with 4 cores, 23.0 GB RAM
15/12/24 01:36:41 INFO client.AppClient$ClientEndpoint: Executor updated: app-20151224013641-0001/0 is now LOADING
15/12/24 01:36:41 INFO client.AppClient$ClientEndpoint: Executor updated: app-20151224013641-0001/0 is now RUNNING
15/12/24 01:36:41 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58297.
15/12/24 01:36:41 INFO netty.NettyBlockTransferService: Server created on 58297
15/12/24 01:36:41 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/12/24 01:36:41 INFO storage.BlockManagerMasterEndpoint: Registering block manager xx.xx.xx.xx:58297 with 2.6 GB RAM, BlockManagerId(driver, xx.xx.xx.xx, 58297)
15/12/24 01:36:41 INFO storage.BlockManagerMaster: Registered BlockManager
15/12/24 01:36:41 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/
Using Python version 2.7.6 (default, Jun 22 2015 17:58:13)
SparkContext available as sc, HiveContext available as sqlContext.
>>> 15/12/24 01:36:44 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@xx.xx.xx.xx:38929/user/Executor#412940208]) with ID 0
15/12/24 01:36:44 INFO storage.BlockManagerMasterEndpoint: Registering block manager xx.xx.xx.xx:44977 with 11.9 GB RAM, BlockManagerId(0, xx.xx.xx.xx, 44977)
Thanks in advance, Jack
Answer 0 (score: 2)
I had a similar problem where the master silently ignored some of the slaves. It came down to the following:
If the application requests resources for its executors that some of the slaves cannot provide, those slaves are automatically excluded, without any warning.
For example, if the application requires executors with 6 cores and 11g of RAM and a slave only offers 3 cores, that slave will not get any tasks from this application. If the number of cores is not specified in the application settings, the maximum allowed per slave is used for the application; but the same does not hold for memory. An illustrative fix is shown below.
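For instance (the numbers here are placeholders, not taken from your setup), if a worker advertises less memory than the 23g requested via SPARK_EXECUTOR_MEMORY, lowering the per-executor request when launching should let that worker be scheduled again:

spark/bin/pyspark --master spark://<SPARK_IP>:<SPARK_PORT> --executor-memory 10g --total-executor-cores 8

Size the --executor-memory and --total-executor-cores values to what the smallest worker actually offers, as reported in the master's web UI.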