I am using Hadoop 2.6.0. While starting the Hadoop services I am unable to start the secondarynamenode, datanode and nodemanager services; they all fail with java.net.BindException errors.
NodeManager:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
at org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.createServer(ResourceLocalizationService.java:356)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.serviceStar
SecondaryNameNode:
2017-10-10 23:58:07,872 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:50090
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:192)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:671)
DataNode:
java.net.BindException: Problem binding to [0.0.0.0:50010] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
Running the netstat -ntpl command shows that the following ports are already in use:
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN -
tcp6 0 0 :::8040 :::* LISTEN -
Could someone please suggest how I can kill the processes holding these ports so I can resolve this issue?
:~/hadoopinstall/hadoop-2.6.0$ jps
18255 org.eclipse.equinox.launcher_1.3.0.dist.jar
27492 RunJar
12387 Jps
11951 ResourceManager
11469 NameNode
Answer 0 (score: 1)
The netstat command you ran also shows the PID of the process listening on each port.
For example:
[root@dn1 ~]# netstat -ntpl | grep 50010
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 1093/java
In the example above, 1093 is the PID of the Java process bound to port 50010. You can easily inspect that process and, if you have the right permissions, terminate it with:
kill -9 1093
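If you prefer a one-liner, the lookup and the kill can be chained. This is only a rough sketch, not part of the original answer: it assumes netstat is run with sudo (so the PID column is populated even for processes owned by other users) and that awk and xargs are available.
# sketch: pull the PID bound to port 50010 out of the netstat output and kill it
sudo netstat -ntpl \
  | awk '/:50010 /{split($7,a,"/"); print a[1]}' \
  | xargs -r sudo kill -9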
Answer 1 (score: 0)
After a lot of searching and experimenting, I found a way to kill the processes holding these ports even when netstat does not show a PID.
Now I can start all the Hadoop services:
*******@127:~$ sudo fuser -k 50010/tcp
[sudo] password for muralee1857:
50010/tcp: 1514
*******@127:~$ sudo kill -9 $(lsof -t -i:50010)
*******@127:~$ sudo fuser -k 50010/tcp
*******@127:~$ sudo kill -9 $(lsof -t -i:50090)
*******@127:~$ sudo fuser -k 50090/tcp
50090/tcp: 2110
*******@127:~$ sudo kill -9 $(lsof -t -i:50090)
*******@127:~$ sudo fuser -k 50090/tcp
*******@127:~$ sudo fuser -k 8040/tcp
8040/tcp: 2304
*******@127:~$ sudo kill -9 $(lsof -t -i:8040)
*******@127:~$ sudo fuser -k 8040/tcp
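To tie it together, the same clean-up and restart can be scripted. This is just a sketch built from the commands above: it assumes fuser is installed, that nothing else on the machine should be using these ports, and that it is run from the hadoop-2.6.0 directory.
# kill whatever still holds the HDFS/YARN ports (fuser -k sends SIGKILL by default)
for port in 50010 50090 8040; do
    sudo fuser -k "${port}/tcp"
done
# then restart only the daemons that failed earlier
sbin/hadoop-daemon.sh start datanode
sbin/hadoop-daemon.sh start secondarynamenode
sbin/yarn-daemon.sh start nodemanager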