I am using the Cloudera distribution with Hive version 13 (0.13) on my cluster.
I am running into an issue where a job makes no progress after writing the log line "Number of reduce tasks is set to 0 since there's no reduce operator".
The log is below. Can you help me figure this out? It does not look like a code problem, because the very same job completes successfully when I re-run it.
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/jars/hive-common-0.13.1-cdh5.2.1.jar!/hive-log4j.properties
Total jobs = 5
Launching Job 1 out of 5
Launching Job 2 out of 5
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
Number of reduce tasks not specified. Defaulting to jobconf value of: 10
set mapreduce.job.reduces=<number>
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1431159077692_1399, Tracking URL = xyz.com:8088/proxy/application_1431159077692_1399/
Starting Job = job_1431159077692_1398, Tracking URL = hxyz.com:8088/proxy/application_1431159077692_1398/
Kill Command = /opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job -kill job_1431159077692_1399
Kill Command = /opt/cloudera/parcels/CDH-5.2.1-1.cdh5.2.1.p0.12/lib/hadoop/bin/hadoop job -kill job_1431159077692_1398
Hadoop job information for Stage-12: number of mappers: 5; number of reducers: 10
Hadoop job information for Stage-1: number of mappers: 5; number of reducers: 10
2015-05-12 19:59:12,298 Stage-1 map = 0%, reduce = 0%
2015-05-12 19:59:12,298 Stage-12 map = 0%, reduce = 0%
2015-05-12 19:59:20,832 Stage-1 map = 20%, reduce = 0%, Cumulative CPU 2.5 sec
2015-05-12 19:59:20,832 Stage-12 map = 80%, reduce = 0%, Cumulative CPU 8.63 sec
2015-05-12 19:59:21,905 Stage-1 map = 60%, reduce = 0%, Cumulative CPU 7.06 sec
2015-05-12 19:59:22,968 Stage-1 map = 80%, reduce = 0%, Cumulative CPU 9.34 sec
2015-05-12 19:59:24,031 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 11.46 sec
2015-05-12 19:59:26,265 Stage-12 map = 100%, reduce = 0%, Cumulative CPU 10.92 sec
2015-05-12 19:59:32,665 Stage-12 map = 100%, reduce = 30%, Cumulative CPU 24.51 sec
2015-05-12 19:59:33,726 Stage-12 map = 100%, reduce = 100%, Cumulative CPU 57.61 sec
2015-05-12 19:59:35,021 Stage-1 map = 100%, reduce = 30%, Cumulative CPU 20.99 sec
MapReduce Total cumulative CPU time: 57 seconds 610 msec
Ended Job = job_1431159077692_1399
2015-05-12 19:59:36,084 Stage-1 map = 100%, reduce = 80%, Cumulative CPU 39.24 sec
2015-05-12 19:59:37,146 Stage-1 map = 100%, reduce = 90%, Cumulative CPU 42.37 sec
2015-05-12 19:59:38,203 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 45.97 sec
MapReduce Total cumulative CPU time: 45 seconds 970 msec
Ended Job = job_1431159077692_1398
2015-05-12 19:59:45,180 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.require.client.cert; Ignoring.
2015-05-12 19:59:45,193 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2015-05-12 19:59:45,196 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.client.conf; Ignoring.
2015-05-12 19:59:45,201 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.keystores.factory.class; Ignoring.
2015-05-12 19:59:45,210 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: hadoop.ssl.server.conf; Ignoring.
2015-05-12 19:59:45,258 WARN [main] conf.Configuration (Configuration.java:loadProperty(2510)) - file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10014/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2015-05-12 19:59:45,792 WARN [main] conf.HiveConf (HiveConf.java:initialize(1491)) - DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
Execution log at: /tmp/srv-hdp-mkt-d/srv-hdp-mkt-d_20150512195858_1b598453-78a8-4867-9402-d972e3c067f2.log
2015-05-12 07:59:46 Starting to launch local task to process map join; maximum memory = 257949696
2015-05-12 07:59:47 Dump the side-table into file: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile10--.hashtable
2015-05-12 07:59:47 Uploaded 1 File to: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile10--.hashtable (475 bytes)
2015-05-12 07:59:47 Dump the side-table into file: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile01--.hashtable
2015-05-12 07:59:47 Uploaded 1 File to: file:/tmp/srv-hdp-mkt-d/hive_2015-05-12_19-58-53_081_2145723752519383568-1/-local-10007/HashTable-Stage-4/MapJoin-mapfile01--.hashtable (388 bytes)
2015-05-12 07:59:47 End of local task; Time Taken: 1.209 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 5
Number of reduce tasks is set to 0 since there's no reduce operator
Answer 0 (score: 2)
Specify the queue for the MR job:
hive> set mapred.job.queue.name=long_running;
hive> SELECT * FROM table_name LIMIT 10;
This worked for me.
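For a non-interactive Hive script, the same setting can go at the top of the .hql file. A minimal sketch — `long_running` is only a placeholder and must match a queue actually configured on your cluster; on MR2/YARN clusters the newer property name is `mapreduce.job.queuename`:

```sql
-- Sketch: route the job to a specific scheduler queue from inside a script.
-- 'long_running' is a placeholder queue name; use one defined on your cluster.
SET mapred.job.queue.name=long_running;    -- old MR1-style property, still honored
SET mapreduce.job.queuename=long_running;  -- MR2/YARN equivalent
SELECT * FROM table_name LIMIT 10;
```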
Answer 1 (score: 1)
Are you setting the number of tasks explicitly in your script? If so, remove that setting and try running it again. When a stage has no reduce work, there should be no need to force multiple tasks to be created.
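Concretely, this means dropping any hard-coded reducer count from the script and letting Hive size each stage itself. A sketch, assuming Hive 0.13 where `-1` means "let Hive estimate":

```sql
-- Instead of pinning the reducer count for every stage, e.g.:
--   SET mapreduce.job.reduces=10;
-- reset it so Hive estimates reducers per stage from the input size:
SET mapreduce.job.reduces=-1;
```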
Answer 2 (score: 1)
I also faced this issue. To resolve it, I checked whether all of my Hadoop services were up. When I ran `jps`, I found that my ResourceManager was not running, so I went ahead and started it with `start-yarn.sh`.
Before doing this, the job had only been taking a very long time; afterwards it ran much faster.