I am running a Hadoop cluster in Kubernetes, with 4 journal nodes and 2 namenodes. Sometimes my datanodes cannot register with the namenode.
17/06/08 07:45:32 INFO datanode.DataNode: Block pool BP-541956668-10.100.81.42-1496827795971 (Datanode Uuid null) service to hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 beginning handshake with NN
17/06/08 07:45:32 ERROR datanode.DataNode: Initialization failed for Block pool BP-541956668-10.100.81.42-1496827795971 (Datanode Uuid null) service to hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.100.9.45, hostname=10.100.9.45): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=b1babba6-9a6f-40dc-933b-08885cbd358e, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-bceaa23f-ba3d-4749-a542-74cda1e82e07;nsid=177502984;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:863)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4529)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1279)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:95)
at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:28539)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
It says:
hadoop-namenode-0.myhadoopcluster/10.100.81.42:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.100.9.45, hostname=10.100.9.45)
However, I can ping hadoop-namenode-0.myhadoopcluster, 10.100.81.42, and 10.100.9.45 from both the datanode and the namenode.
The datanode's /etc/hosts:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.100.9.45 hadoop-datanode-0.myhadoopcluster.default.svc.cluster.local hadoop-datanode-0
The namenode's /etc/hosts:
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.100.81.42 hadoop-namenode-0.myhadoopcluster.default.svc.cluster.local hadoop-namenode-0
I have already set dfs.namenode.datanode.registration.ip-hostname-check to false in hdfs-site.xml.
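For reference, a minimal sketch of how that entry looks in hdfs-site.xml (property name and value are the ones mentioned above; the surrounding layout is just the standard Hadoop configuration format):

<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>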
I suspect the problem is DNS-related. In other similar questions, Hadoop was not deployed in Kubernetes or Docker containers, so I am posting this one. Please do not mark it as a duplicate...
Answer 0 (score: 1)
In my case, I also added these three configurations to both the namenode and the datanode:
dfs.namenode.datanode.registration.ip-hostname-check: false
dfs.client.use.datanode.hostname: false
dfs.datanode.use.datanode.hostname: false
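A quick way to check that the values were actually picked up (this assumes the hdfs CLI is available inside the pods and the daemons were restarted after the change):

hdfs getconf -confKey dfs.namenode.datanode.registration.ip-hostname-check
hdfs getconf -confKey dfs.client.use.datanode.hostname
hdfs getconf -confKey dfs.datanode.use.datanode.hostname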
Answer 1 (score: 0)
I hope you have found a solution to this problem by now. I ran into a similar issue last week; my cluster was set up in a different environment, but the background of the problem was the same.
Basically, if the cluster uses a DNS resolver, reverse DNS lookup needs to be set up to resolve this issue, and that has to be configured at the DNS server level. Alternatively, if the namenode looks at the /etc/hosts file to find datanodes, then there needs to be an entry there for every datanode. An illustration of both options is shown below.
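As a sketch, using the addresses from the question (whether your cluster relies on DNS PTR records or on static hosts entries is an assumption on my part):

# On the namenode, check whether the datanode's IP reverse-resolves to a hostname
nslookup 10.100.9.45
getent hosts 10.100.9.45

# Or, if you go the /etc/hosts route, add an entry for the datanode on the namenode
10.100.9.45 hadoop-datanode-0.myhadoopcluster.default.svc.cluster.local hadoop-datanode-0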
I updated an old question on the Hortonworks community forum; the link is below: https://community.hortonworks.com/questions/24320/datanode-denied-communication-with-namenode.html?childToView=135321#answer-135321