How to run shell script jobs in parallel in Oozie

Time: 2017-04-17 17:44:36

Tags: shell hadoop hdfs oozie oozie-coordinator

I have a shell script in HDFS, and I have scheduled it in Oozie with the following workflow.

Workflow:

<workflow-app name="Shell_test" xmlns="uri:oozie:workflow:0.5">
<start to="shell-8f63"/>
<kill name="Kill">
    <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
</kill>
<action name="shell-8f63">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>shell.sh</exec>
        <argument>${input_file}</argument>
        <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
        <file>/user/xxxx/shell_script/lib/shell.sh#shell.sh</file>
        <file>/user/xxxx/args/${input_file}#${input_file}</file>
    </shell>
    <ok to="End"/>
    <error to="Kill"/>
</action>
<end name="End"/>
</workflow-app>

job.properties:

nameNode=xxxxxxxxxxxxxxxxxxxx
jobTracker=xxxxxxxxxxxxxxxxxxxxxxxx
queueName=default
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/xxxxxxx/xxxxxx 

args file:

tableA
tableB
tablec
tableD

Right now the shell script runs a single job for one name from the args file. How can I schedule this shell script to run in parallel?

I want the script to run 10 jobs at the same time.

What steps are needed to do this? What changes should I make to the workflow?

Should I create 10 workflows to run 10 parallel jobs, or what is the best way to handle this?

My shell script:

#!/bin/bash

# Expect exactly one argument: the table name
[ $# -ne 1 ] && { echo "Usage : $0 table ";exit 1; }

table=$1

job_name=${table}

# Execute the saved sqoop job named after the table
sqoop job  --exec ${job_name}

My sqoop job script:

sqoop job --create ${table} -- import \
  --connect ${domain}:${port}/${database} \
  --username ${username} --password ${password} \
  --query "SELECT * from ${database}.${table} WHERE \$CONDITIONS" -m 1 \
  --hive-import --hive-database ${hivedatabase} --hive-table ${table} \
  --as-parquetfile \
  --incremental append --check-column id --last-value "${last_val}" \
  --target-dir /user/xxxxx/hive/${hivedatabase}.db/${table} \
  --outdir /home/$USER/logs/outdir

2 Answers:

Answer 0 (score: 4):

To run the jobs in parallel, you can create a workflow.xml that contains a fork. See the example below; it should help you.

If you look at the XML below, you will notice that I use the same script in every action and only pass different configuration files. In your case you would pass the different table names you need, either from a configuration file or directly as env-vars in your workflow.xml.

Taking sqoop as the example, your sqoop command should live in the .sh script, like below:

sqoop job --create ${table} -- import \
  --connect ${domain}:${port}/${database} \
  --username ${username} --password ${password} \
  --query "SELECT * from "${database}"."${table}" WHERE \$CONDITIONS" -m 1 \
  --hive-import --hive-database "${hivedatabase}" --hive-table "${hivetable}" \
  --as-parquetfile \
  --incremental append --check-column id --last-value "${last_val}" \
  --target-dir /user/xxxxx/hive/${hivedatabase}.db/${table} \
  --outdir /home/$USER/logs/outdir
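For illustration only, here is a minimal sketch of what that generic shell.sh could look like, assuming the source and Hive names arrive through the action's env-var elements and the connection settings (domain, port, username, password, last_val) come from a config file shipped alongside the script. The file name sqoop_connection.conf is a placeholder, not something defined in the original answer:

#!/bin/bash
# Hypothetical generic shell.sh: table/database names come from the environment
# variables set by the action's env-var elements; connection settings are assumed
# to live in a config file shipped with the action (the name is a placeholder).

source ./sqoop_connection.conf   # placeholder: defines domain, port, username, password, last_val

# Fail fast if the workflow action did not set the expected variables
: "${database:?database env-var not set}"
: "${table:?table env-var not set}"
: "${hivedatabase:?hivedatabase env-var not set}"
: "${hivetable:?hivetable env-var not set}"

# Recreate and run the saved sqoop job for this table
# (delete any existing saved job with the same name first, since --create fails otherwise)
sqoop job --delete "${table}" 2>/dev/null
sqoop job --create "${table}" -- import \
  --connect "${domain}:${port}/${database}" \
  --username "${username}" --password "${password}" \
  --query "SELECT * from ${database}.${table} WHERE \$CONDITIONS" -m 1 \
  --hive-import --hive-database "${hivedatabase}" --hive-table "${hivetable}" \
  --as-parquetfile --incremental append --check-column id --last-value "${last_val}" \
  --target-dir /user/xxxxx/hive/${hivedatabase}.db/${table}
sqoop job --exec "${table}"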

So basically you write your sqoop job generically, so that it expects the Hive table, Hive database, source table and source database names from workflow.xml. That way you call the same script for every action, and only the env-var values in each workflow action change. See the changes I made to the first action below.

 <workflow-app xmlns='uri:oozie:workflow:0.5' name='Workflow_Name'>
    <start to="forking"/>
     
     <fork name="forking">
      <path start="shell-8f63"/>
      <path start="shell-8f64"/>
      <path start="SCRIPT3CONFIG3"/>
      <path start="SCRIPT4CONFIG4"/>
      <path start="SCRIPT5CONFIG5"/>
      <path start="script6config6"/>
    </fork>

    <action name="shell-8f63">
    <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>shell.sh</exec>
        <argument>${input_file}</argument>
        <env-var>database=sourcedatabase</env-var>
        <env-var>table=sourcetablename</env-var>
        <env-var>hivedatabase=yourhivedatabasename</env-var>
        <env-var>hivetable=yourhivetablename</env-var>
        <!-- You can pass as many variables as you want via env-var elements. -->
        <!-- Parameter values should be wrapped in double quotes when used inside the shell script. -->
        <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
        <file>/user/xxxx/shell_script/lib/shell.sh#shell.sh</file>
        <file>/user/xxxx/args/${input_file}#${input_file}</file>
    </shell>	 
     <ok to="joining"/>
     <error to="sendEmail"/>
     </action>

    <action name="shell-8f64">
   <shell xmlns="uri:oozie:shell-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>shell.sh</exec>
        <argument>${input_file}</argument>
        <env-var>database=sourcedatabase1</env-var>
        <env-var>table=sourcetablename1</env-var>
        <env-var>hivedatabase=yourhivedatabasename1</env-var>
        <env-var>hivetable=yourhivetablename2</env-var>
        <!-- Again, pass as many env-var variables as this action's script needs. -->
        <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
        <file>/user/xxxx/shell_script/lib/shell.sh#shell.sh</file>
        <file>/user/xxxx/args/${input_file}#${input_file}</file>
    </shell>
    <ok to="joining"/>
    <error to="sendEmail"/>
    </action>

    <action name="SCRIPT3CONFIG3">
    <shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
    <property>
    <name>mapred.job.queue.name</name>
    <value>${queueName}</value>
    </property>
    </configuration>
    <exec>COMMON_SCRIPT_YOU_WANT_TO_USE.sh</exec>
    <argument>SQOOP_2</argument>
    <env-var>UserName</env-var>
    <file>${nameNode}/${projectPath}/COMMON_SCRIPT_YOU_WANT_TO_USE.sh#COMMON_SCRIPT_YOU_WANT_TO_USE.sh</file>
    <file>${nameNode}/${projectPath}/THIRD_CONFIG</file>

    </shell>	 
    <ok to="joining"/>
    <error to="sendEmail"/>
    </action>

    <action name="SCRIPT4CONFIG4">
    <shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
    <property>
    <name>mapred.job.queue.name</name>
    <value>${queueName}</value>
    </property>
    </configuration>
    <exec>COMMON_SCRIPT_YOU_WANT_TO_USE.sh</exec>
    <argument>SQOOP_2</argument>
    <env-var>UserName</env-var>
    <file>${nameNode}/${projectPath}/COMMON_SCRIPT_YOU_WANT_TO_USE.sh#COMMON_SCRIPT_YOU_WANT_TO_USE.sh</file>
    <file>${nameNode}/${projectPath}/FOURTH_CONFIG</file>

    </shell>	 
    <ok to="joining"/>
    <error to="sendEmail"/>
    </action>

    <action name="SCRIPT5CONFIG5">
    <shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
    <property>
    <name>mapred.job.queue.name</name>
    <value>${queueName}</value>
    </property>
    </configuration>
    <exec>COMMON_SCRIPT_YOU_WANT_TO_USE.sh</exec>
    <argument>SQOOP_2</argument>
    <env-var>UserName</env-var>
    <file>${nameNode}/${projectPath}/COMMON_SCRIPT_YOU_WANT_TO_USE.sh#COMMON_SCRIPT_YOU_WANT_TO_USE.sh</file>
    <file>${nameNode}/${projectPath}/FIFTH_CONFIG</file>

    </shell>	 
    <ok to="joining"/>
    <error to="sendEmail"/>
    </action>

    <action name="script6config6">
    <shell xmlns="uri:oozie:shell-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
    <property>
    <name>mapred.job.queue.name</name>
    <value>${queueName}</value>
    </property>
    </configuration>
    <exec>COMMON_SCRIPT_YOU_WANT_TO_USE.sh</exec>
    <argument>SQOOP_2</argument>
    <env-var>UserName</env-var>
    <file>${nameNode}/${projectPath}/COMMON_SCRIPT_YOU_WANT_TO_USE.sh#COMMON_SCRIPT_YOU_WANT_TO_USE.sh</file>
    <file>${nameNode}/${projectPath}/SIXTH_CONFIG</file>

    </shell>	 
    <ok to="joining"/>
    <error to="sendEmail"/>
    </action>

    <join name="joining" to="end"/>

    <action name="sendEmail">
    <email xmlns="uri:oozie:email-action:0.1">
    <to>youremail.com</to>
    <subject>your subject</subject>
    <body>your email body</body>
    </email>
    <ok to="kill"/>
    <error to="kill"/>
    </action>
     
    <kill name="kill">
    <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
    </workflow-app>

I have shown an example with 6 parallel jobs above; if you want to run more parallel actions, you can add more paths to the fork at the start and write the corresponding actions in the workflow.

Here is how it looks from HUE:

(screenshot of the forked workflow graph as rendered in HUE)

Answer 1 (score: 0):

As I understand it, you need to run 'x' number of jobs in parallel in Oozie, and this 'x' can vary each time. What you can do is:

Have a workflow that contains 2 actions:

  1. Shell Action
  2. SubWorkflow Action
    1. Shell Action - This runs a shell script that, based on your 'x', dynamically decides which tables you need to pick, and creates an .xml file that will serve as the workflow xml for the sub-workflow action that comes next. This sub-workflow will fork the shell jobs so they run in parallel. Note that you need to place this xml in HDFS so it is available to your sub-workflow (see the sketch after this list).

    2. Sub-workflow Action - This simply executes the workflow created in the previous action.
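As a rough sketch of step 1 (everything below is hypothetical: the script name generate_subworkflow.sh, the HDFS paths, and the assumption that the table list comes from the args file), the shell action could generate the fork-based sub-workflow and upload it to HDFS like this:

#!/bin/bash
# generate_subworkflow.sh (hypothetical): build a fork/join workflow with one
# shell action per table listed in the args file, then upload it to HDFS.

args_file=$1                      # e.g. the args file path under /user/xxxx/args/
out_xml=sub_workflow.xml
hdfs_target=/user/xxxx/shell_script/sub_workflow/workflow.xml   # placeholder HDFS path

tables=$(hdfs dfs -cat "${args_file}")

{
  echo '<workflow-app name="Shell_parallel_sub" xmlns="uri:oozie:workflow:0.5">'
  echo '  <start to="forking"/>'
  echo '  <fork name="forking">'
  for t in ${tables}; do
    echo "    <path start=\"shell_${t}\"/>"
  done
  echo '  </fork>'
  for t in ${tables}; do
    cat <<EOF
  <action name="shell_${t}">
    <shell xmlns="uri:oozie:shell-action:0.1">
      <job-tracker>\${jobTracker}</job-tracker>
      <name-node>\${nameNode}</name-node>
      <exec>shell.sh</exec>
      <argument>${t}</argument>
      <file>/user/xxxx/shell_script/lib/shell.sh#shell.sh</file>
    </shell>
    <ok to="joining"/>
    <error to="Kill"/>
  </action>
EOF
  done
  echo '  <join name="joining" to="End"/>'
  echo '  <kill name="Kill"><message>Parallel shell action failed</message></kill>'
  echo '  <end name="End"/>'
  echo '</workflow-app>'
} > "${out_xml}"

# Put the generated workflow where the parent's sub-workflow action expects it
hdfs dfs -put -f "${out_xml}" "${hdfs_target}"

The sub-workflow action in step 2 would then point its app-path at the location the generated workflow.xml was uploaded to, typically with propagate-configuration enabled so that properties such as jobTracker and nameNode are passed down from the parent workflow.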