I created a Hive table using CREATE EXTERNAL TABLE with partitions. I am using Hive version hive-0.7.1-cdh3u2. When I run a simple query, i.e. a select count(*), I get an error.
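For context, a minimal sketch of the kind of table definition involved, inferred from the query below; the column list, delimiter, and HDFS locations are assumptions, since only the table name and the dt/hr partition keys appear in the original query:

-- Hypothetical DDL; columns, delimiter, and locations are assumed.
CREATE EXTERNAL TABLE test (
  id STRING,
  payload STRING
)
PARTITIONED BY (dt STRING, hr STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/hive/warehouse/test';

-- External partitions must be registered explicitly before querying.
ALTER TABLE test ADD PARTITION (dt='2012-01-30', hr='17')
  LOCATION '/user/hive/warehouse/test/dt=2012-01-30/hr=17';

The failing query and its output: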
hive> select count(*) from test where dt='2012-01-30' and hr='17';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapred.reduce.tasks=<number>
Starting Job = job_201201311809_0006, Tracking URL = http://localhost:15030/jobdetails.jsp?jobid=job_201201311809_0006
Kill Command = /Users/balaji/svn/app/hadoop/hadoop-0.20.2-cdh3u2/bin/hadoop job -Dmapred.job.tracker=localhost:10012 -kill job_201201311809_0006
2012-01-31 21:20:58,074 Stage-1 map = 0%, reduce = 0%
2012-01-31 21:21:25,402 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201201311809_0006 with errors
And the JobTracker error is:
FAILED
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
at org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileRecordReader.initNextRecordReader(Hadoop20SShims.java:306)
at org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileRecordReader.<init>(Hadoop20SShims.java:269)
at org.apache.hadoop.hive.shims.Hadoop20SShims$CombineFileInputFormatShim.getRecordReader(Hadoop20SShims.java:366)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:413)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:371)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
Caused by: java.lang.reflect.InvocationTargetException
at sun
Can someone help? I am completely blocked by this issue. Thanks!
Answer 0 (score: 3):
I found the issue. Hive was looking for the SerDe jar in order to run the job in Hadoop. It was fixed by adding this property to hive-default.xml:

<property>
  <name>hive.aux.jars.path</name>
  <value>serde jar path in hadoop</value>
</property>
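The value above is a placeholder for the actual jar location. As an alternative to editing the config file, the jar can also be added per-session from the Hive CLI, which makes it available to the job; the path here is hypothetical:

-- Hypothetical path; substitute the real location of the SerDe jar.
ADD JAR /usr/lib/hive/lib/custom-serde.jar;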
Thanks.