Row exception in Hive when using join

Posted: 2015-03-05 09:19:50

Tags: hadoop hive hiveql

I am getting the following exception when performing a join in a Hive query, and the reducer hangs after it reaches 68% completion.

java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"rs317647905"},"alias":1}
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":"rs317647905"},"alias":1}
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
        ... 7 more
Caused by: org.apache.hadoop.

Below are my query and table structure:

create table table_llv_N_C as
select table_line_n_passed.chromosome_number,
       table_line_n_passed.position,
       table_line_c_passed.id
from table_line_n_passed
join table_line_c_passed
  on (table_line_n_passed.chromosome_number = table_line_c_passed.chromosome_number)

hive> desc table_line_n_passed;
OK
chromosome_number       string
position        int
id      string
ref     string
alt     string
quality double
filter  string
info    string
format  string
line6   string
Time taken: 0.854 seconds

Why am I getting this error, and how can I solve it? The complete stack trace is given below.

2015-03-09 10:19:09,347 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1797000000 rows
2015-03-09 10:19:09,919 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1798000000 rows
2015-03-09 10:19:09,919 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1798000000 rows
2015-03-09 10:19:10,495 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1799000000 rows
2015-03-09 10:19:10,495 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1799000000 rows
2015-03-09 10:19:11,069 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1800000000 rows
2015-03-09 10:19:11,069 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1800000000 rows
2015-03-09 10:19:11,644 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1801000000 rows

2015-03-09 10:19:11,644 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1801000000 rows
2015-03-09 10:19:12,229 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1802000000 rows
2015-03-09 10:19:12,229 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1802000000 rows
2015-03-09 10:19:13,310 INFO org.apache.hadoop.hive.ql.exec.JoinOperator: 6 forwarding 1803000000 rows
2015-03-09 10:19:13,310 INFO org.apache.hadoop.hive.ql.exec.SelectOperator: 7 forwarding 1803000000 rows
2015-03-09 10:19:13,666 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)

at org.apache.hadoop.ipc.Client.call(Client.java:1238)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1228)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1081)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:502)

2015-03-09 10:19:14,043 FATAL ExecReducer: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":""},"alias":1}
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)

at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:134)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
... 7 more

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)

at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:620)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:742)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:745)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:847)
at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:109)
... 9 more

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)

at org.apache.hadoop.ipc.Client.call(Client.java:1238)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy10.addBlock(Unknown Source)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at $Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1228)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1081)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:502)

2015-03-09 10:19:14,800 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2015-03-09 10:19:14,806 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":""},"alias":1}
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=1) {"key":{"joinkey0":"12"},"value":{"_col2":""},"alias":1}
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
        ... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)

at org.apache.hadoop.hive.ql.exec.JoinOperator.processOp(JoinOperator.java:134)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
... 7 more

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/hive-root/hive_2015-03-09_10-03-59_970_3646456754594156815-1/_task_tmp.-ext-10001/_tmp.000000_0 could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1361)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2362)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1760)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1756)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1754)

at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:620)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:803)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:742)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.genUniqueJoinObject(CommonJoinOperator.java:745)
at org.apache.hadoop.hive.ql.exec.CommonJoinOperator.checkAndGenObject(CommonJoinOperator.java:847)

1 Answer:

Answer 0 (score: 2)

The root cause is probably a lack of disk space in your HDFS cluster, given that the query only seems to fail after running for a while, combined with this message in the stack trace:

... could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and no node(s) are excluded in this operation.

That message seems to pop up either when there are network communication problems (for example, losing contact with a datanode) or when HDFS cannot service a write because no datanode with a free block can be found. Since your query starts off successfully, I'm inclined to rule out network issues; instead, it looks like your Hive query simply ran out of the disk space it needed to produce that table. You may want to check the current usage on your cluster, which you can do through something like Ambari (if you have it installed) or from the command line with one of the following:

hdfs dfs -df -h

If you are running an older version, it may look something like:

hadoop fs -df -h
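
If those show the cluster at or near capacity, a couple of follow-up checks can help narrow things down. The two commands below are standard HDFS CLI calls suggested here as extra diagnostics (they are not taken from your output); the scratch path comes from the stack trace above:

# Per-datanode capacity, used, and remaining space
hdfs dfsadmin -report
# Size of the Hive scratch directory the failed write was targeting
hdfs dfs -du -h /tmp/hive-root

Also note that your reducers had already forwarded roughly 1.8 billion rows at 68%, so the joined result itself may simply be enormous. As a rough sketch of how you might estimate the output size before re-running the CREATE TABLE, a count over the same join executes the full join but writes almost nothing to disk:

select count(*)
from table_line_n_passed n
join table_line_c_passed c
  on (n.chromosome_number = c.chromosome_number);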