I started testing an Oozie job on a YARN cluster with "oozie job -config /usr/lib/oozie/oozie-4.0.1/examples/apps/pig/job.properties -run". My job is stuck at 0% complete and then only prints heart beats:
2014-06-25 12:51:57,800 [JobControl] INFO org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at localhost/127.0.0.1:8032
2014-06-25 12:51:57,831 [JobControl] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2014-06-25 12:51:58,021 [JobControl] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
2014-06-25 12:51:58,021 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2014-06-25 12:51:58,021 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 1
2014-06-25 12:51:58,022 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2014-06-25 12:51:58,022 [JobControl] INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
2014-06-25 12:51:58,034 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
2014-06-25 12:51:58,084 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1403700612904_0002
2014-06-25 12:51:58,084 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Kind: mapreduce.job, Service: job_1403700612904_0001, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@63ad6884)
2014-06-25 12:51:58,085 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Kind: RM_DELEGATION_TOKEN, Service: 127.0.0.1:8032, Ident: (owner=test, renewer=oozie mr token, realUser=root, issueDate=1403700700415, maxDate=14043$
2014-06-25 12:51:58,352 [JobControl] INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1403700612904_0002 to ResourceManager at localhost/127.0.0.1:8032
2014-06-25 12:51:58,393 [JobControl] INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8088/proxy/application_1403700612904_0002/
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1403700612904_0002
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1403700612904_0002
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases A,B
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[18,4],B[19,4] C: R:
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: A[18,4],B[19,4] C: R:
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://localhost:50030/jobdetails.jsp?jobid=job_1403700612904_0002
2014-06-25 12:51:58,394 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - More information at: http://localhost:50030/jobdetails.jsp?jobid=job_1403700612904_0002
2014-06-25 12:51:58,429 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2014-06-25 12:51:58,429 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Heart beat
Answer 0 (score: 0)
I faced the same problem when scheduling pig-0.12.1 through oozie-4.0.1 on hadoop-2.2.0.
I could not schedule a Pig script through Oozie on a single-node hadoop-2.2.0 cluster, but I got it working on a multi-node cluster after making the following change.
The NodeManager and the ResourceManager were running on the same machine, which is why I got this error. I started the NodeManager on a second machine, and my problem was solved.
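A plausible explanation for why moving the NodeManager to a second machine helps (this reasoning is my own, not stated in the answer): the Oozie launcher is itself a MapReduce job, so it holds containers while the child Pig job requests a container for its own ApplicationMaster. If the lone node has no spare capacity, the child job waits forever at 0%, printing heart beats. A minimal sketch of that allocation deadlock, with made-up container sizes:

```python
# Illustrative sketch only (not Oozie or YARN code). The Oozie launcher
# occupies a container while the child Pig job still needs a container
# for its ApplicationMaster; on one small node there is no room left.

def can_run_pig_action(node_capacities_mb, launcher_mb=1536, child_am_mb=1536):
    """Greedy first-fit: place the launcher container, then the child AM."""
    free = list(node_capacities_mb)
    for needed_mb in (launcher_mb, child_am_mb):
        for i, cap in enumerate(free):
            if cap >= needed_mb:
                free[i] -= needed_mb
                break
        else:
            return False  # no node can host this container -> stuck at 0%
    return True

# One small node: the launcher fits, the child AM does not.
print(can_run_pig_action([2048]))        # False -> endless "Heart beat"
# NodeManager started on a second machine: both containers fit.
print(can_run_pig_action([2048, 2048]))  # True -> job proceeds
```

The container sizes (1536 MB, 2048 MB) are assumptions for illustration; the real values depend on your yarn-site.xml and mapred-site.xml settings.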
Answer 1 (score: 0)
The Heart beat error occurs because the total memory available to Hadoop is insufficient; this typically happens when you run on a small cluster machine.

Solution: increase the total memory available to the NodeManagers so the MapReduce job can execute. The link "HEART BEAT ERROR SOLUTION" contains the steps.
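The linked steps are not reproduced in the answer. A typical change of this kind (my own example, not taken from the link; the values are placeholders to size for your machines) is to raise the YARN memory limits in yarn-site.xml on every NodeManager and restart YARN:

```xml
<!-- yarn-site.xml: example values only; tune to your hardware -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value> <!-- total RAM each NodeManager offers to containers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>  <!-- smallest container the scheduler will grant -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value> <!-- largest single container request allowed -->
</property>
```

With more memory per NodeManager, both the Oozie launcher job and the child Pig job can get containers at the same time, so the job no longer hangs at 0%.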