Error: java.net.NoRouteToHostException: No route to host

Date: 2016-10-20 10:35:56

Tags: hadoop hive yarn

When I run select * from customers in Hive, I get results. But when I run select count(*) from customers, the job fails. In JobHistory I found 4 failed map tasks. The map task log contains:

2016-10-19 12:47:09,725 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-10-19 12:47:09,786 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-10-19 12:47:09,786 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2016-10-19 12:47:09,796 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2016-10-19 12:47:09,796 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1476893269614_0006, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@18aabe9c)
2016-10-19 12:47:09,878 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2016-10-19 12:47:29,958 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 0 time(s); maxRetries=45
2016-10-19 12:47:30,961 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:31,962 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:32,963 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:36,971 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:37,975 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:38,976 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:46,992 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:47,993 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:48,994 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:50,999 INFO [main] org.apache.hadoop.ipc.Client: Retrying connect to server: slave1/192.168.1.33:37159. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-10-19 12:47:51,002 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.net.NoRouteToHostException: No Route to Host from  master1/192.168.1.30 to slave1:37159 failed on socket timeout exception: java.net.NoRouteToHostException: Aucun chemin d'accès pour atteindre l'hôte cible; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:757)
    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1408)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:243)
    at com.sun.proxy.$Proxy9.getTask(Unknown Source)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:132)
Caused by: java.net.NoRouteToHostException: Aucun chemin d'accès pour atteindre l'hôte cible
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:614)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1524)
    at org.apache.hadoop.ipc.Client.call(Client.java:1447)
    ... 4 more

In Cloudera Manager, under Hosts > slave1 > Processes > YARN NodeManager > Log File, I found two problems:

WARN org.apache.hadoop.hdfs.BlockReaderFactory:

I/O error constructing remote block reader.
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.33:56208, remote=/192.168.1.30:50010, for file /user/admin/.staging/job_1476893269614_0001/libjars/hive-hbase-handler-1.1.0-cdh5.8.2.jar, for pool BP-1641388066-192.168.1.30-1476615377122 block 1073751347_10539
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)  

WARN org.apache.hadoop.hdfs.DFSClient:

Failed to connect to /192.168.1.30:50010 for block, add to deadNodes and continue. java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.33:56208, remote=/192.168.1.30:50010, for file /user/admin/.staging/job_1476893269614_0001/libjars/hive-hbase-handler-1.1.0-cdh5.8.2.jar, for pool BP-1641388066-192.168.1.30-1476615377122 block 1073751347_10539
java.io.IOException: Got error for OP_READ_BLOCK, status=ERROR, self=/192.168.1.33:56208, remote=/192.168.1.30:50010, for file /user/admin/.staging/job_1476893269614_0001/libjars/hive-hbase-handler-1.1.0-cdh5.8.2.jar, for pool BP-1641388066-192.168.1.30-1476615377122 block 1073751347_10539
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:467)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:432)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:881)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:759)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:376)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:662)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:942)
    at java.io.DataInputStream.read(DataInputStream.java:100)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:265)
    at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
    at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
    at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Please help; I have been stuck on this problem for a long time. Thanks!

2 Answers:

Answer 0 (score: 0)

Your first query, select * from customers, returns results because Hive does not run a MapReduce job for it: a plain SELECT * is answered by a simple fetch task that reads the table files directly. The count(*) query does launch a MapReduce job, and that is when the network problem between your nodes surfaces.
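
A quick way to confirm this, as a minimal sketch (assuming the hive CLI is on the PATH of your edge node), is to compare the two query plans:

hive -e "EXPLAIN SELECT * FROM customers;"
hive -e "EXPLAIN SELECT count(*) FROM customers;"

The first plan should contain only a fetch stage, while the second contains a Map Reduce stage; launching those map tasks is what triggers the NoRouteToHostException.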

Are you sure about your Hadoop configuration? Have you configured your hosts file?

How to configure hosts file for Hadoop ecosystem
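
Based on the addresses in your logs (master1 = 192.168.1.30, slave1 = 192.168.1.33), a correct /etc/hosts for this cluster would look roughly like the sketch below; this is an assumption about your setup, but the file should be identical on every node:

127.0.0.1      localhost
192.168.1.30   master1
192.168.1.33   slave1

Important: do not also map master1 or slave1 to 127.0.0.1 or 127.0.1.1, or the daemons will bind to the loopback interface and be unreachable from the other nodes.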

Answer 1 (score: -1)

Check your /etc/hosts file and your /etc/hostname file. They should match across all the nodes; if they do not, you will get errors like this. For example:

sudo gedit /etc/hosts

hosts
=====
127.0.0.1   localhost
127.0.0.1   orienit

sudo gedit /etc/hostname

hostname
========
orienit
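
Once the files match, it is worth verifying from each node that the names resolve to the real LAN IPs and that the other node's ports are reachable. A NoRouteToHostException on a host that resolves and pings correctly usually means a firewall is rejecting the connection (see the wiki link in the error message). A rough check, run on slave1 against master1; it assumes getent, ping, nc and iptables are available, and the port comes from the NodeManager log in the question:

getent hosts master1        # expect 192.168.1.30, not 127.0.x.x
ping -c 3 192.168.1.30
nc -vz 192.168.1.30 50010   # the DataNode port that failed in the log

sudo iptables -L -n         # look for REJECT/DROP rules
sudo iptables -F            # temporary test only; restore the rules afterwards

Repeat the same checks in the other direction from master1, since the map tasks failed while connecting from master1 to slave1:37159.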