I am new to Hadoop & Hive. Whenever I run a Hive query that launches a MapReduce job (such as SELECT COUNT(*), AVG(), or loading data into HBase), it fails with the error below. I have Googled it but found no solution. Other queries that do not launch a job (such as SELECT *, CREATE, USE) work fine.
hive> select count(*) from test_table;
Query ID = dev4_20171016095209_43c4e980-efbd-42d3-94d4-1a4b8de3d956
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1508127394848_0001, Tracking URL = http://dev4:8088/proxy/application_1508127394848_0001/
Kill Command = /usr/local/hadoop-2.8.1//bin/hadoop job -kill job_1508127394848_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2017-10-16 09:52:38,820 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1508127394848_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
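The Hive console only reports "return code 2", so the real cause presumably lives in the YARN container logs. This is roughly how I tried to dig deeper (a sketch; log aggregation is disabled on my cluster, as shown in the application overview further down, so the first command may return nothing and the NodeManager's local log directory has to be checked instead):

# fetch aggregated container logs (only works when log aggregation is enabled)
yarn logs -applicationId application_1508127394848_0001

# fall back to the NodeManager's local container logs (assuming the default yarn.nodemanager.log-dirs)
ls ${HADOOP_HOME}/logs/userlogs/application_1508127394848_0001/

Below are my configuration files.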
Hadoop - mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.job.map.memory.mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>mapred.job.reduce.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048M</value>
  </property>
</configuration>
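One thing I am not sure about in this file: as far as I can tell from Hadoop's deprecated-properties list, mapred.job.map.memory.mb and mapred.job.reduce.memory.mb are just the old names for mapreduce.map.memory.mb and mapreduce.reduce.memory.mb, so the map memory above is effectively set twice with conflicting values (8192 vs 4096). A cleaned-up sketch would keep only the new names (the values are my guess and depend on the node's RAM):

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2048M</value>
  </property>
  <!-- assumption: reduce-side JVM opts were never set in my file, so adding them here -->
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2048M</value>
  </property>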
Hadoop - core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}</value>
  </property>
</configuration>
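One thing that worries me here: ${dev4} is not a variable that Hadoop defines, and in the diagnostics further down the expanded paths come out as .../tmp/hadoop-/nm-local-dir/... with nothing after the dash. If that empty expansion is the problem, the conventional form of this property uses ${user.name} (a sketch; Hadoop's own default is /tmp/hadoop-${user.name}):

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${user.name}</value>
  </property>

(fs.default.name above is the deprecated name for fs.defaultFS, but that by itself should be harmless.)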
Hadoop - mapred-env.sh
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-i386/
export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
export HIVE_HOME=/usr/local/hive
#export HADOOP_JOB_HISTORYSERVER_OPTS=
#export HADOOP_MAPRED_LOG_DIR="" # Where log files are stored. $HADOOP_MAPRED_HOME/logs by default.
#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.
#export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.
#export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default
#export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. Defaults to 0.
Hadoop - yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Hive - hive-site.xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
    <description>metadata is stored in a MySQL server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>MySQL JDBC driver class</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
    <description>user name for connecting to mysql server</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>harileela</value>
    <description>password for connecting to mysql server</description>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///usr/local/hive/lib/hive-serde-1.2.2.jar</value>
    <description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
  </property>
  <property>
    <name>hive.exec.reducers.bytes.per.reducer</name>
    <value>1000000</value>
  </property>
</configuration>
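As a side note, hive.exec.reducers.bytes.per.reducer=1000000 means only about 1 MB of input per reducer, far below Hive 1.2's default of 256000000 (~256 MB), so any sizable table would spawn a huge number of reducers. I doubt it explains the failure below, but a more usual setting would be (a sketch):

  <property>
    <name>hive.exec.reducers.bytes.per.reducer</name>
    <value>256000000</value>
  </property>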
Here is the application overview from the YARN ResourceManager UI:
User: dev4
Name: select count(*) from test_table(Stage-1)
Application Type: MAPREDUCE
Application Tags:
Application Priority: 0 (Higher Integer value indicates higher priority)
YarnApplicationState: FAILED
Queue: default
FinalStatus Reported by AM: FAILED
Started: Mon Oct 16 13:10:37 +0530 2017
Elapsed: 8sec
Tracking URL: History
Log Aggregation Status: DISABLED
Diagnostics:
Application application_1508139045948_0002 failed 2 times due to AM Container for appattempt_1508139045948_0002_000002 exited with exitCode: 127
Failing this attempt.Diagnostics: Exception from container-launch.
Container id: container_1508139045948_0002_02_000001
Exit code: 127
Exception message: /bin/bash: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor_session.sh: No such file or directory
/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor.sh: line 4: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp: No such file or directory
/bin/mv: cannot stat '/home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp': No such file or directory
Stack trace: ExitCodeException exitCode=127: /bin/bash: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor_session.sh: No such file or directory
/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor.sh: line 4: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp: No such file or directory
/bin/mv: cannot stat '/home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp': No such file or directory
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
at org.apache.hadoop.util.Shell.run(Shell.java:869)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 127
For more detailed output, check the application tracking page: http://dev4:8088/cluster/app/application_1508139045948_0002 Then click on links to logs of each attempt.
. Failing the application.
Unmanaged Application: false
Application Node Label expression: <Not set>
AM container Node Label expression: <DEFAULT_PARTITION>
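The failing paths in the diagnostics all contain tmp/hadoop-/... with nothing after the dash, which matches the ${dev4} placeholder from my core-site.xml expanding to an empty string, and exit code 127 is the shell's "file or command not found" status. A quick check along those lines (hypothetical, not output from the logs above):

# if a directory named just 'hadoop-' shows up here, the variable expansion is indeed empty
ls -la /home/dev4/local/hadoop-2.8.1/tmp/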
I cannot figure out exactly what is going wrong or how to fix it properly. Any help is appreciated. Thanks.