When I run a simple one-record query, Hive appears to look for the table's source path on the local filesystem, where it does not exist. I do not hit this problem in cluster mode. Can you help?

Environment: Hive 0.10 on CDH3, with hive.exec.mode.local.auto = true

hive (default)> select 'test' from dual;
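For reference, these are the session settings in effect when the failure occurs; the commented-out line is a possible workaround I am considering (forcing cluster execution), not part of my original setup:

```sql
-- Settings in effect for the failing session (Hive 0.10 on CDH3).
SET hive.exec.mode.local.auto=true;    -- lets Hive pick local-only mode for small queries

-- Possible workaround (untested): disable auto local mode to force cluster execution.
-- SET hive.exec.mode.local.auto=false;
```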
Automatically selecting local only mode for query
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Execution log at: /tmp/dm_hdp_dev_batch/dm_hdp_dev_batch_20130629230606_31b8a7bc-9618-4090-99ab-179f576ff5ae.log
java.io.FileNotFoundException: File does not exist: /tmp/dm_hdp_dev_batch/hive_2013-06-29_23-06-11_544_4259930697398507763/-mr-10000/1/emptyFile
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:562)
	at org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.<init>(CombineFileInputFormat.java:462)
	at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
	at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)
	at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)
	at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)
	at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:983)
	at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:975)
	at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:886)
	at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:839)
	at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:813)
	at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
	at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:197)
Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /tmp/dm_hdp_dev_batch/hive_2013-06-29_23-06-11_544_4259930697398507763/-mr-10000/1/emptyFile)'
Execution failed with exit status: 1
Obtaining error information
Task failed!
Task ID:
  Stage-1

Logs:

/tmp/dm_hdp_dev_batch/hive.log
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
cat /tmp/dm_hdp_dev_batch/dm_hdp_dev_batch_20130629225252_b7170eb0-69a7-4b23-9c64-c84d4a292b86.log
2013-06-29 22:52:37,674 INFO exec.ExecDriver (SessionState.java:printInfo(391)) - Execution log at: /tmp/dm_hdp_dev_batch/dm_hdp_dev_batch_20130629225252_b7170eb0-69a7-4b23-9c64-c84d4a292b86.log
2013-06-29 22:52:37,933 INFO exec.ExecDriver (ExecDriver.java:execute(320)) - Using org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
2013-06-29 22:52:37,939 INFO exec.ExecDriver (ExecDriver.java:execute(342)) - adding libjars: file:///usr/lib/hive/lib/hive-builtins-0.9.0-cdh3u4b-SNAPSHOT.jar
2013-06-29 22:52:37,939 INFO exec.ExecDriver (ExecDriver.java:addInputPaths(840)) - Processing alias dual
2013-06-29 22:52:37,940 INFO exec.ExecDriver (ExecDriver.java:addInputPaths(858)) - Adding input file hdfs://biggy.src.com/apps/mktg/m360/dev/data/Interim/intrm_Dummy
2013-06-29 22:52:37,940 INFO exec.Utilities (Utilities.java:isEmptyPath(1807)) - Content Summary not cached for hdfs://biggy.src.com/apps/mktg/m360/dev/data/Interim/intrm_Dummy
2013-06-29 22:52:38,121 INFO exec.ExecDriver (ExecDriver.java:addInputPath(789)) - Changed input file to file:/tmp/dm_hdp_dev_batch/hive_2013-06-29_22-52-37_901_1833503522069219852/-mr-10000/1
2013-06-29 22:52:38,131 INFO util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(43)) - Loaded the native-hadoop library
2013-06-29 22:52:38,291 INFO jvm.JvmMetrics (JvmMetrics.java:init(71)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2013-06-29 22:52:38,292 INFO exec.ExecDriver (ExecDriver.java:createTmpDirs(215)) - Making Temp Directory: hdfs://biggy.src.com/tmp/hive-dm_hdp_dev_batch/hive_2013-06-29_22-52-35_955_7873715221223972086/-ext-10001
2013-06-29 22:52:38,302 WARN mapred.JobClient (JobClient.java:copyAndConfigureFiles(655)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-06-29 22:52:38,379 WARN snappy.LoadSnappy (LoadSnappy.java:<clinit>(36)) - Snappy native library is available
2013-06-29 22:52:38,380 INFO snappy.LoadSnappy (LoadSnappy.java:<clinit>(44)) - Snappy native library loaded
2013-06-29 22:52:38,384 INFO io.CombineHiveInputFormat (CombineHiveInputFormat.java:getSplits(370)) - CombineHiveInputSplit creating pool for file:/tmp/dm_hdp_dev_batch/hive_2013-06-29_22-52-37_901_1833503522069219852/-mr-10000/1; using filter path file:/tmp/dm_hdp_dev_batch/hive_2013-06-29_22-52-37_901_1833503522069219852/-mr-10000/1
2013-06-29 22:52:38,388 INFO mapred.FileInputFormat (FileInputFormat.java:listStatus(196)) - Total input paths to process : 1
2013-06-29 22:52:38,390 INFO mapred.JobClient (JobClient.java:run(925)) - Cleaning up the staging area file:/tmp/hadoop-dm_hdp_dev_batch/mapred/staging/dm_hdp_dev_batch1888887102/.staging/job_local_0001
2013-06-29 22:52:38,390 ERROR security.UserGroupInformation (UserGroupInformation.java:doAs(1180)) - PriviledgedActionException as:dm_hdp_dev_batch (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /tmp/dm_hdp_dev_batch/hive_2013-06-29_22-52-37_901_1833503522069219852/-mr-10000/1/emptyFile
2013-06-29 22:52:38,391 ERROR exec.ExecDriver (SessionState.java:printError(400)) - Job Submission failed with exception 'java.io.FileNotFoundException(File does not exist: /tmp/dm_hdp_dev_batch/hive_2013-06-29_22-52-37_901_1833503522069219852/-mr-10000/1/emptyFile)'
java.io.FileNotFoundException: File does not exist: /tmp/dm_hdp_dev_batch/hive_2013-06-29_22-52-37_901_1833503522069219852/-mr-10000/1/emptyFile
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:562)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat$OneFileInfo.<init>(CombineFileInputFormat.java:462)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:256)
at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:212)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:392)
at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:358)
at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:387)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:975)
at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:886)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1177)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:839)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:813)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:435)
at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:677)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:197)