Retrying connect to server 0.0.0.0/0.0.0.0:8032

Date: 2020-09-05 04:32:53

Tags: java apache-spark hadoop hdfs yarn

I am running Spark on YARN. The Hadoop version is 3.1.1 and the Spark version is 2.3.2. The Hadoop cluster has 3 nodes.

When I submit a Spark job (job1) as user A, it runs fine.

But when job2 is submitted as user B, it fails with the errors below.

Users A and B are on the same machine.

INFO RMProxy - Connecting to ResourceManager at /0.0.0.0:8032
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
INFO Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
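For context, 0.0.0.0:8032 is YARN's built-in default ResourceManager address, which the client falls back to when it does not pick up yarn-site.xml (for example, if HADOOP_CONF_DIR is not set in that user's environment). A sketch of the relevant yarn-site.xml entries, where the hostname rm-host is a placeholder for the actual ResourceManager host:

```xml
<!-- yarn-site.xml (sketch): client-side ResourceManager address.
     "rm-host" is a placeholder; substitute the real ResourceManager hostname. -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm-host</value>
</property>
<property>
  <!-- Defaults to ${yarn.resourcemanager.hostname}:8032, i.e. 0.0.0.0:8032
       when no configuration is found. -->
  <name>yarn.resourcemanager.address</name>
  <value>rm-host:8032</value>
</property>
```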
