Hadoop MapReduce cannot connect to ResourceManager

Date: 2017-06-07 03:26:32

Tags: hadoop mapreduce hdfs hadoop3

I'm trying to set up Hadoop as a single-node cluster (pseudo-distributed), following the Apache guide. I'm now trying to run a MapReduce job using the example command it provides:
hadoop@hadoop:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-alpha3.jar grep input output 'dfs[a-z]+'
xxxx-xx-xx xx:xx:xx,xxx INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
xxxx-xx-xx xx:xx:xx,xxx INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
xxxx-xx-xx xx:xx:xx,xxx INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
...
xxxx-xx-xx xx:xx:xx,xxx INFO ipc.Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
xxxx-xx-xx xx:xx:xx,xxx WARN ipc.Client: Failed to connect to server: 0.0.0.0/0.0.0.0:8032: retries get failed due to exceeded maximum allowed retries number: 10
java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
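
One way to double-check this kind of failure is to see whether anything is listening on the port at all (8032 is the default port of yarn.resourcemanager.address; the ss command below is just a diagnostic sketch):

hadoop@hadoop:/usr/local/hadoop$ ss -ltn | grep 8032   # no output means nothing is bound to 8032, matching the Connection refused above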

Looking this problem up online, everyone else who runs into it seems to be using YARN rather than plain MapReduce. My hdfs-site.xml is the same as the one in the guide:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration> 
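
For reference, the guide also configures mapred-site.xml so that jobs are submitted to YARN at all; without mapreduce.framework.name set to yarn, the client would use the local runner instead of contacting a ResourceManager:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>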

I ran jps, though I'm not sure what I'm looking for:

hadoop@hadoop:/usr/local/hadoop$ jps
9860 DataNode
10075 SecondaryNameNode
9708 NameNode
11021 Jps
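
Notably, ResourceManager and NodeManager are missing from that list. They are normally brought up with the stock YARN script, after which both should appear in jps:

hadoop@hadoop:/usr/local/hadoop$ sbin/start-yarn.sh
hadoop@hadoop:/usr/local/hadoop$ jps   # a healthy setup would now also list ResourceManager and NodeManager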

Any help is appreciated.

Edit: I dug into hadoop-hadoop-resourcemanager-hadoop.log and found this:

xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8032: starting
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Transitioned to active state
xxxx-xx-xx xx:xx:xx,xxx INFO org.eclipse.jetty.util.log: Logging initialized @7307ms
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: adding path spec: /cluster/*
xxxx-xx-xx xx:xx:xx,xxx INFO org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
xxxx-xx-xx xx:xx:xx,xxx FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.ExceptionInInitializerError
    at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
    at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
    ...
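
So the ResourceManager is dying during startup rather than merely failing to bind port 8032. The ExceptionInInitializerError comes out of Guice's bundled cglib, which fails to initialize on newer JVMs, so checking the runtime is a quick sanity test:

hadoop@hadoop:/usr/local/hadoop$ java -version   # Guice's bundled cglib does not initialize on Java 9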

Edit 2: Here is my yarn-site.xml, in case it helps:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
</configuration>

1 Answer:

Answer 0 (score: 1)

I was using Java 9, and there is no Hadoop release that supports Java 9 yet: https://issues.apache.org/jira/browse/HADOOP-11123
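
One way to address this is to point Hadoop at a Java 8 installation in etc/hadoop/hadoop-env.sh (a sketch; the JDK path below is an assumption for a Debian/Ubuntu OpenJDK 8 package, so adjust it to wherever Java 8 lives on your machine):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # assumed path; set to your local JDK 8 install

The start scripts read hadoop-env.sh, so restarting the daemons after this change picks up the new JVM.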