Hadoop stuck at "Running job"

Time: 2016-09-02 03:06:08

Tags: hadoop hive

I am trying to run the Hadoop WordCount program from the documentation, but the job gets stuck at "Running job":

16/09/02 10:51:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/02 10:51:13 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/09/02 10:51:13 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/09/02 10:51:14 INFO input.FileInputFormat: Total input paths to process : 1
16/09/02 10:51:14 INFO mapreduce.JobSubmitter: number of splits:2
16/09/02 10:51:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1472783047951_0003
16/09/02 10:51:14 INFO impl.YarnClientImpl: Submitted application application_1472783047951_0003
16/09/02 10:51:14 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1472783047951_0003/
16/09/02 10:51:14 INFO mapreduce.Job: Running job: job_1472783047951_0003

The tracking URL http://hadoop-master:8088/proxy/application_1472783047951_0003/ shows: [screenshot]

The ApplicationMaster is running on http://hadoop-slave2:8042, which shows: [screenshot]

Since WordCount gets stuck, Hive gets stuck as well:

hive (default)> select a, b, count(1) as cnt from newtb group by a, b;
Query ID = hadoop_20160902110124_d2b2680b-c493-4986-aa84-f65794bfd8e4
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1472783047951_0004, Tracking URL = http://hadoop-master:8088/proxy/application_1472783047951_0004/
Kill Command = /opt/hadoop-2.6.4/bin/hadoop job  -kill job_1472783047951_0004

However, a plain select * works with no problem:
hive (default)> select * from newtb;
OK
1   2   3
1   3   4
2   3   4
5   6   7
8   9   0
1   8   3
Time taken: 0.101 seconds, Fetched: 6 row(s)

So I think something is wrong with MapReduce. There is enough disk space. How can I fix this?

1 Answer:

Answer 0 (score: 1):

You are having this problem because the ApplicationMaster cannot start containers and run the job. First try restarting the machines; if nothing changes, you will have to change the memory allocation settings in yarn-site.xml and mapred-site.xml. Start with basic, conservative memory settings.
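For illustration only, here is a minimal sketch of such settings. The values below are not from the original answer; they assume a node with roughly 2 GB of RAM available to YARN and must be tuned to your hardware:

<!-- yarn-site.xml: total memory YARN may hand out on this node,
     plus the smallest and largest container it will allocate -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>

<!-- mapred-site.xml: memory for the MapReduce ApplicationMaster
     and for each map/reduce task container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>512</value>
</property>

The key constraint is that the ApplicationMaster and task sizes must fit within yarn.scheduler.maximum-allocation-mb, otherwise the job is accepted but no container can ever be granted, which matches the "stuck at Running job" symptom. After editing the files, restart YARN (stop-yarn.sh, then start-yarn.sh) on all nodes so the NodeManagers pick up the new limits.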

Use the following guide: http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide/#yarn-configuration_1
