I am trying to shorten my Oozie workflow definitions, and I am following this Cloudera post: http://blog.cloudera.com/blog/2013/11/how-to-shorten-your-oozie-workflow-definitions/
According to that post, we can add a job.xml file in the <global> section, but it does not work for me.
<global>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <job-xml>job1.xml</job-xml>
    <configuration>
        <property>
            <name>mapred.job.queue.name</name>
            <value>${queueName}</value>
        </property>
    </configuration>
</global>
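My understanding of the pattern from the post is that everything declared in <global> (job tracker, name node, job-xml, shared properties) is inherited by each action, so the actions themselves can drop that boilerplate. Here is a minimal sketch of what I expected that to look like; the workflow name, action name, and job1.xml are just placeholders, not my real files:

<workflow-app xmlns="uri:oozie:workflow:0.4" name="global-example-wf">
    <global>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <!-- job1.xml: a placeholder Hadoop config file bundled with the app -->
        <job-xml>job1.xml</job-xml>
    </global>
    <start to="mr-node"/>
    <!-- the action omits <job-tracker>, <name-node> and <job-xml>;
         they are expected to come from the <global> section above -->
    <action name="mr-node">
        <map-reduce>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>MR failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>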
----------------------- Main Workflow -----------------------
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.4" name="main-wf">
    <global>
        <job-xml>/user/${wf:user()}/${examplesRoot}/apps/map-reduce/job.xml</job-xml>
    </global>
    <start to="main-node"/>
    <action name="main-node">
        <sub-workflow>
            <app-path>/user/${wf:user()}/${examplesRoot}/apps/map-reduce/workflow.xml</app-path>
        </sub-workflow>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>MR failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
----------------------- Sub Workflow -----------------------
<?xml version="1.0" encoding="UTF-8"?>
<workflow-app xmlns="uri:oozie:workflow:0.4" name="OozieBDU-wf">
    <start to="wordcount-node"/>
    <action name="wordcount-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapreduce.map.class</name>
                    <value>com.yumecorp.WordCount$TokenizerMapper</value>
                </property>
                <property>
                    <name>mapreduce.reduce.class</name>
                    <value>com.yumecorp.WordCount$IntSumReducer</value>
                </property>
                <property>
                    <name>mapred.output.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapred.output.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/user/${wf:user()}/${examplesRoot}/input-data/decodeAndProfile</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/user/${wf:user()}/${examplesRoot}/output-data/${outputDir}</value>
                </property>
                <property>
                    <name>mapreduce.job.outputformat.class</name>
                    <value>org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat</value>
                </property>
                <property>
                    <name>mapred.mapoutput.key.class</name>
                    <value>org.apache.hadoop.io.Text</value>
                </property>
                <property>
                    <name>mapred.mapoutput.value.class</name>
                    <value>org.apache.hadoop.io.IntWritable</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>MR failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
----------------------- job.xml -----------------------
<configuration>
    <property>
        <name>jobTracker</name>
        <value>pari.hhh.com:8021</value>
    </property>
    <property>
        <name>nameNode</name>
        <value>hdfs://pari.hhh.com:8020</value>
    </property>
    <property>
        <name>queueName</name>
        <value>default</value>
    </property>
    <property>
        <name>examplesRoot</name>
        <value>wordcount</value>
    </property>
    <property>
        <name>outputDir</name>
        <value>map-reduce</value>
    </property>
</configuration>
----------------------- job.properties -----------------------
examplesRoot=wordcount
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce/mainworkflow.xml
----------------------- Oozie command -----------------------
oozie job -config job.properties -run
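For completeness, this is roughly how I submit and then inspect the job. The Oozie server URL below is my assumption for this sketch (default port 11000 on the same host as the JobTracker from job.xml); normally I have OOZIE_URL exported already:

# assumed server URL; otherwise pass it with -oozie on each command
export OOZIE_URL=http://pari.hhh.com:11000/oozie
oozie job -config job.properties -run
# inspect the submitted workflow and its actions
oozie job -info 0000012-150131091133585-oozie-oozi-W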
----------------------- Error -----------------------
JOB[0000012-150131091133585-oozie-oozi-W] ACTION[0000012-150131091133585-oozie-oozi-W@wordcount-node] ELException in ActionStartXCommand
javax.servlet.jsp.el.ELException: variable [jobTracker] cannot be resolved
    at org.apache.oozie.util.ELEvaluator$Context.resolveVariable(ELEvaluator.java:106)
    at org.apache.commons.el.NamedValue.evaluate(NamedValue.java:124)
    at org.apache.commons.el.ExpressionString.evaluate(ExpressionString.java:114)
    at org.apache.commons.el.ExpressionEvaluatorImpl.evaluate(ExpressionEvaluatorImpl.java:274)
    at org.apache.commons.el.ExpressionEvaluatorImpl.evaluate(ExpressionEvaluatorImpl.java:190)
    at org.apache.oozie.util.ELEvaluator.evaluate(ELEvaluator.java:203)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:188)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:63)
    at org.apache.oozie.command.XCommand.call(XCommand.java:281)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:323)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:252)
    at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:174)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Can anyone help?
Cheers