I'm having a problem with Hive queries. If I launch a count(*) query from the Hue interface, I get an exception like this:
15/01/23 15:06:42 ERROR operation.Operation: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:147)
at org.apache.hive.service.cli.operation.SQLOperation.access$000(SQLOperation.java:69)
at org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:200)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:502)
at org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
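(As far as I can tell, "return code 2" here is just HiveServer2's generic wrapper around a failed MapReduce task, so the underlying error has to be dug out of the task logs. On YARN I believe that would be something like the following, where the application id is a placeholder:)

# Sketch: pull the task logs for the failed job (application id is a placeholder)
yarn logs -applicationId application_1421994167_0042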
Launching the same query from the Hive CLI, I get:
hive> select count(*) from tweets;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
at java.lang.StringCoding.decode(StringCoding.java:193)
at java.lang.String.<init>(String.java:416)
at com.google.protobuf.LiteralByteString.toString(LiteralByteString.java:148)
at com.google.protobuf.ByteString.toStringUtf8(ByteString.java:572)
at org.apache.hadoop.hdfs.protocol.proto.HdfsProtos$ExtendedBlockProto.getPoolId(HdfsProtos.java:743)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:525)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:751)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1188)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1324)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1432)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1441)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:549)
at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy17.getListing(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1906)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.<init>(DistributedFileSystem.java:742)
at org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:731)
at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:1664)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:300)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:217)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:336)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:302)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:435)
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:525)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:517)
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. GC overhead limit exceeded
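(Judging from the stack trace, the OutOfMemoryError is thrown on the client side while FileInputFormat is listing the input files to compute splits, so presumably it is the CLI JVM's own heap that runs out. One thing I could try, assuming the standard Hadoop launcher scripts that honor HADOOP_CLIENT_OPTS:)

# Sketch: give the Hive CLI / Hadoop client JVM more heap (4g is an arbitrary example)
export HADOOP_CLIENT_OPTS="-Xmx4g $HADOOP_CLIENT_OPTS"
hive -e 'select count(*) from tweets;'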
I tried to check the logs in the JobTracker, but I got
cat
How can I fix all these errors?
Answer 0 (score: 1)
I figured out the problem with
select count(*) from tweets;
The problem was that I had put the serde.jar in the wrong directory on some of the node hosts, which is why queries failed in the Hive CLI/Hue. CDH 4.* throws a "class not found exception" for this, and CDH 5.* returns error code 2.
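Rather than hand-copying the jar to every node, it can also be registered per session so Hive ships it with the job; a minimal sketch (the jar path below is just an example, not my actual one):

-- Sketch: register the SerDe jar for the session so it reaches all task nodes
ADD JAR /opt/serde/json-serde.jar;
select count(*) from tweets;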
But the problem with the JobTracker (YARN) still remains.
Answer 1 (score: 0)
The reason it works in the Hive CLI but not in Beeline is that the Hive CLI does not enforce user/group security, whereas Beeline is subject to some form of authorization: Sentry/Ranger (if installed) or HDFS-level permissions.
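A quick way to check which case applies, assuming the default CDH warehouse layout (the path, host, and user below are placeholders):

# Sketch: verify the connecting user can read the table's HDFS directory
hdfs dfs -ls /user/hive/warehouse/tweets
# and reproduce the query through Beeline as that user
beeline -u jdbc:hive2://localhost:10000/default -n myuser -e 'select count(*) from tweets;'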