How to fix "INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.10.24.227:8050" on Hadoop

Date: 2019-02-18 12:46:35

Tags: java hadoop yarn resourcemanager

I am setting up a new Hadoop environment on a cluster of 2 machines. When I start dfs and yarn and then run jps, all the required daemons are listed. But when I run a simple WordCount job, I hit this problem:

INFO client.RMProxy: Connecting to ResourceManager at hadoop1/10.10.24.227:8050
19/02/18 06:14:17 INFO ipc.Client: Retrying connect to server: hadoop1/10.10.24.227:8050. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

When I check yarn-resourcemanager.log, I see:

2019-02-18 06:13:35,344 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:717)
    at org.apache.hadoop.ipc.Server.start(Server.java:2441)
    at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.serviceStart(ClientRMService.java:239)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:584)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:972)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1013)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1009)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1049)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1186)

I am running Hadoop 2.7.3 with Java JDK 8 on Ubuntu 16, with 4 GB of RAM, 10 GB of disk space, and 1 core. I have tried changing values in yarn-site.xml and mapred-site.xml, but so far nothing has worked.
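In case it helps diagnose this, here is what I can run on the machine. These are generic Linux commands; I am assuming (not certain) that "unable to create new native thread" is related to the per-user process limit or to memory pressure on this small node:

```shell
# Per-user limit on processes/threads; a low value (e.g. 1024) can
# trigger "unable to create new native thread" even with free RAM
ulimit -u

# Free memory in MB; every native thread also needs stack space
free -m

# Total number of threads currently running system-wide
ps -eLf | wc -l
```

If `ulimit -u` is low relative to the system-wide thread count, that would point at the process limit rather than heap memory.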

Here is my hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/usr/local/hadoop/hadoopdata/hdfs/namenode</value>
        </property>
</configuration>

My yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>hadoop1:8025</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>hadoop1:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>hadoop1:8050</value>
        </property>
</configuration>
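For context, one change I have seen suggested for small nodes (but have not confirmed fixes this error) is capping YARN's memory in yarn-site.xml so containers cannot overcommit a 4 GB machine. The values below are illustrative assumptions for my hardware, not verified settings:

```xml
<property>
        <!-- Assumed cap: leave ~2 GB for the OS and Hadoop daemons -->
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
</property>
<property>
        <!-- Largest single container allocation -->
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
</property>
<property>
        <!-- Smallest single container allocation -->
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
</property>
```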

My mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>hadoop1:10020</value>
        </property>
</configuration>

My core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://hadoop1:9000</value>
        </property>
</configuration>

I hope someone can help me find what is causing this. Thanks!

0 Answers:

There are no answers yet.