Spark worker node timeout

Date: 2017-10-01 19:54:04

Tags: scala hadoop apache-spark

When I run my Spark application with sbt run, with the configuration pointing to the master of a remote cluster, the workers never do anything useful and the following warning is printed repeatedly in the sbt run log:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

This is what my Spark configuration looks like:

@transient lazy val conf: SparkConf = new SparkConf()
    .setMaster("spark://master-ip:7077")
    .setAppName("HelloWorld")
    .set("spark.executor.memory", "1g")
    .set("spark.driver.memory", "12g")

@transient lazy val sc: SparkContext = new SparkContext(conf)

val lines   = sc.textFile("hdfs://master-public-dns:9000/test/1000.csv")

I know this warning usually appears when the cluster is misconfigured and the workers either lack resources or were never started in the first place. However, according to my Spark UI (at master-ip:8080), the worker nodes seem to have plenty of RAM and CPU cores, and they even try to execute my application, but they exit and leave this in their stderr logs:

INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; 
users  with view permissions: Set(ubuntu, myuser); 
groups with view permissions: Set(); users  with modify permissions: Set(ubuntu, myuser); groups with modify permissions: Set()

Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1713)
...
Caused by: java.util.concurrent.TimeoutException: Cannot receive any reply from 192.168.0.11:35996 in 120 seconds
... 8 more
ERROR RpcOutboxMessage: Ask timeout before connecting successfully

Any ideas?

1 Answer:

Answer 0 (score: -1)


Cannot receive any reply from 192.168.0.11:35996 in 120 seconds

Can you telnet to this IP and port from the workers? Your driver machine may have multiple network interfaces; try setting SPARK_LOCAL_IP in $SPARK_HOME/conf/spark-env.sh so the driver binds to an address the cluster can reach.
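
For concreteness, a rough sketch of both checks. The address and port below are taken from the timeout message in your logs; which interface to pin SPARK_LOCAL_IP to is an assumption on my part — use whichever address of the driver machine is actually routable from the workers:

# On a worker node, check that the driver's RPC endpoint is reachable.
# The port (35996) is ephemeral and changes per run, so test while a job is running:
telnet 192.168.0.11 35996

# On the driver machine, in $SPARK_HOME/conf/spark-env.sh,
# bind Spark to the interface the workers can reach:
export SPARK_LOCAL_IP=192.168.0.11

If telnet cannot connect, the executors cannot reply to the driver either, which is exactly the ask timeout you are seeing.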