Open socket connections on a Hadoop datanode on CentOS

Asked: 2012-04-10 12:20:36

Tags: sockets hadoop centos cloudera

I am running a sample Hadoop job on my CentOS 6.2.64 machine for debugging:

hadoop jar hadoop-examples-0.20.2-cdh3u3.jar randomtextwriter o

and it seems that even after the job completes, the connections to the datanode remain open:

java       8979 username   51u     IPv6          326596025        0t0       TCP localhost:50010->localhost:56126 (ESTABLISHED)
java       8979 username   54u     IPv6          326621990        0t0       TCP localhost:50010->localhost:56394 (ESTABLISHED)
java       8979 username   59u     IPv6          326578719        0t0       TCP *:50010 (LISTEN)
java       8979 username   75u     IPv6          326596390        0t0       TCP localhost:50010->localhost:56131 (ESTABLISHED)
java       8979 username   84u     IPv6          326621621        0t0       TCP localhost:50010->localhost:56388 (ESTABLISHED)
java       8979 username   85u     IPv6          326622171        0t0       TCP localhost:50010->localhost:56395 (ESTABLISHED)
java       9276 username   77u     IPv6          326621714        0t0       TCP localhost:56388->localhost:50010 (ESTABLISHED)
java       9276 username   78u     IPv6          326596118        0t0       TCP localhost:56126->localhost:50010 (ESTABLISHED)
java       9408 username   75u     IPv6          326596482        0t0       TCP localhost:56131->localhost:50010 (ESTABLISHED)
java       9408 username   76u     IPv6          326622170        0t0       TCP localhost:56394->localhost:50010 (ESTABLISHED)
java       9408 username   77u     IPv6          326622930        0t0       TCP localhost:56395->localhost:50010 (ESTABLISHED)
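For reference, a listing like the one above can be reproduced with lsof; a minimal check, assuming the datanode is on its default data-transfer port 50010 as shown in the output:

# list all TCP sockets on the datanode data-transfer port
lsof -i TCP:50010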

Eventually, after some time, I get this error in the datanode log:

2012-04-12 15:56:29,151 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-591618896-176.9.25.36-50010-1333654003291, infoPort=50075, ipcPort=50020):DataXceiver
java.io.FileNotFoundException: /tmp/hadoop-serendio/dfs/data/current/subdir4/blk_-4401902756916730461_31251.meta (Too many open files)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:137)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getMetaDataInputStream(FSDataset.java:996)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:125)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:258)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:163)

This causes problems on the production system, where the datanode runs out of xcievers. The behavior does not seem to occur on my Ubuntu development box. We are using Cloudera hadoop-0.20.2-cdh3u3.

Any pointers for resolving this issue?

1 Answer:

Answer 0 (score: 1)

If it is not already specified, add the following to hdfs-site.xml:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
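Note that the datanode has to be restarted for the new value to take effect. A sketch of the follow-up steps, assuming a standard CDH3 package install (the init-script name is an assumption; adjust it to your installation):

# restart the datanode so the new xciever limit is picked up
sudo service hadoop-0.20-datanode restart

# the stack trace reports "Too many open files", so it is also worth checking
# the datanode's file-descriptor limit (8979 is the datanode PID from the
# lsof listing in the question)
grep 'open files' /proc/8979/limits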

The default is 256, I think...

This formula gives a rough estimate of how many xcievers you need to avoid errors like this:

 # of xcievers = (( # of storefiles + # of regions * 4 + # of regionServers * 2 ) / # of datanodes) + reserves (20%)
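As a rough worked example (all cluster figures below are hypothetical, not from the question): with 1000 storefiles, 100 regions, 2 region servers, and 4 datanodes, the formula yields (1000 + 100*4 + 2*2) / 4 = 351, and adding the 20% reserve gives roughly 422, so a setting of 4096 leaves ample headroom.

# hypothetical cluster: 1000 storefiles, 100 regions, 2 region servers, 4 datanodes
echo $(( (1000 + 100*4 + 2*2) / 4 ))   # prints 351; add ~20% reserve -> ~422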