Writing to HDFS from Kafka: createBlockOutputStream Exception

Date: 2018-05-23 09:37:20

Tags: docker hadoop apache-kafka hdfs

I am running Hadoop from Docker Swarm, with 1 namenode and 3 datanodes (on 3 physical machines). I am also using Kafka and Kafka Connect with the HDFS connector to write messages to HDFS in Parquet format.

I am able to write data to HDFS using the HDFS client (hdfs put). But when Kafka is writing messages, it works at the beginning and then fails with this error:

org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.0.0.8:50010]
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
[2018-05-23 10:30:10,125] INFO Abandoning BP-468254989-172.17.0.2-1527063205150:blk_1073741825_1001 (org.apache.hadoop.hdfs.DFSClient:1265)
[2018-05-23 10:30:10,148] INFO Excluding datanode DatanodeInfoWithStorage[10.0.0.8:50010,DS-cd1c0b17-bebb-4379-a5e8-5de7ff7a7064,DISK] (org.apache.hadoop.hdfs.DFSClient:1269)
[2018-05-23 10:31:10,203] INFO Exception in createBlockOutputStream (org.apache.hadoop.hdfs.DFSClient:1368)
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.0.0.9:50010]
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:534)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)

After that, the process can no longer reach any datanode:

[2018-05-23 10:32:10,316] WARN DataStreamer Exception (org.apache.hadoop.hdfs.DFSClient:557)
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /topics/+tmp/test_hdfs/year=2018/month=05/day=23/hour=08/60e75c4c-9129-454f-aa87-6c3461b54445_tmp.parquet could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)

However, if I look at the Hadoop web admin console, all nodes appear to be up and healthy.

I have checked hdfs-site.xml, and "dfs.client.use.datanode.hostname" is set to true on both the namenode and the datanodes. All the IPs in the Hadoop configuration files are defined with the 0.0.0.0 address.
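For reference, this is roughly what the setup described above would look like in hdfs-site.xml; a minimal sketch only, the bind address and port are illustrative (50010 is taken from the log above, not from the original configuration files):

    <!-- hdfs-site.xml on the namenode and datanodes -->
    <property>
      <name>dfs.client.use.datanode.hostname</name>
      <value>true</value>
    </property>
    <!-- datanode bound to 0.0.0.0 so it listens on all interfaces -->
    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:50010</value>
    </property>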

I also tried reformatting the namenode, but the error occurred again.

Could the problem be that Kafka writes to HDFS too fast and overwhelms it? That would be strange, because I tried the same configuration on a smaller cluster and it worked fine even with a high throughput of Kafka messages.

Do you have any idea what the root cause of this problem might be?

Thanks

1 Answer:

Answer 0 (score: 0)

dfs.client.use.datanode.hostname=true must also be configured on the client side, and given this line in your log stack:

    java.nio.channels.SocketChannel[connection-pending remote=/10.0.0.9:50010]

I guess 10.0.0.9 refers to a private network IP; so it seems that the property is not set in hdfs-client.xml on your client.
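A hedged sketch of what the client-side fix could look like: the Kafka Connect HDFS connector reads Hadoop client configuration from a directory it can be pointed at (for example via its hadoop.conf.dir setting), so placing an hdfs-site.xml with the property there should make the HDFS client resolve datanodes by hostname instead of by their private/overlay IPs such as 10.0.0.9. The directory path below is an assumption, not taken from the original setup:

    <!-- hdfs-site.xml visible to the Kafka Connect worker,
         e.g. under the directory the connector is configured to read
         Hadoop configuration from (path is illustrative) -->
    <property>
      <name>dfs.client.use.datanode.hostname</name>
      <value>true</value>
    </property>

With this in place, the client asks the namenode for datanode hostnames and connects to those, which matters when the client runs outside the Docker overlay network and cannot reach the 10.0.0.x addresses directly.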

You can find more details here.