I am trying to set up a cluster with Hadoop (later I will add YARN and Spark), but I am getting an error message:
user1:masterPC:/opt/hadoop-3.1.2/etc$ jps
25777 Jps
user1:masterPC:/opt/hadoop-3.1.2/etc$ start-dfs.sh
Starting namenodes on [xxxxx.xxxxx.xx]
Starting datanodes
Starting secondary namenodes [xxxxx]
user1:masterPC:/opt/hadoop-3.1.2/etc$ jps
26148 NameNode
27159 Jps
26829 SecondaryNameNode
user1:masterPC:/opt/hadoop-3.1.2/etc$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
user1:masterPC:/opt/hadoop-3.1.2/etc$ jps
26148 NameNode
27988 Jps
26829 SecondaryNameNode
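Note that even after `start-yarn.sh`, the `jps` output above shows no ResourceManager or NodeManager process, which suggests the YARN daemons failed during startup rather than merely being unreachable. A quick way to confirm this and surface the startup error (a sketch; the log path is an assumption based on the `/opt/hadoop-3.1.2` install prefix above):

```shell
# Confirm whether the ResourceManager JVM is actually up
if jps 2>/dev/null | grep -q ResourceManager; then
  echo "ResourceManager is running"
else
  echo "ResourceManager is NOT running"
fi

# If it is not, the startup failure will be recorded in the ResourceManager log
# (assumed default log location under the /opt/hadoop-3.1.2 prefix shown above)
tail -n 50 /opt/hadoop-3.1.2/logs/*resourcemanager*.log 2>/dev/null || true
```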
When I try to list the YARN nodes, I get an error message:
user1:masterPC:/opt/hadoop-3.1.2/etc$ yarn node -list
2019-03-11 14:56:32,366 INFO client.RMProxy: Connecting to ResourceManager at xxxxx.xxxxx.xx/1xx.xx.xx.xx:8032
2019-03-11 14:56:33,708 INFO ipc.Client: Retrying connect to server: xxxxx.xxxxx.xx/1xx.xx.xx.xx:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-03-11 14:56:34,710 INFO ipc.Client: Retrying connect to server: xxxxx.xxxxx.xx/1xx.xx.xx.xx:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2019-03-11 14:56:35,711 INFO ipc.Client: Retrying connect to server: xxxxx.xxxxx.xx/1xx.xx.xx.xx:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
How can I fix this connection problem? Thanks.
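From what I have read, the retries against port 8032 are consistent with the missing ResourceManager in `jps`: the client resolves the right address, but nothing is listening there. My understanding is that in Hadoop 3.x the ResourceManager address is derived from `yarn.resourcemanager.hostname` in `yarn-site.xml` on every node. A minimal sketch of that setting (the hostname value is a placeholder for the master's actual name, not my real config):

```xml
<!-- yarn-site.xml: minimal ResourceManager addressing (hostname is a placeholder) -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master-host</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

If this file differs between the master and the workers, is that a likely cause of the connection failure?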