Error while executing shell script using Oozie

Time: 2018-07-25 05:26:35

Tags: apache-kafka oozie confluent oozie-coordinator oozie-workflow

I'm trying to run kafka-connect-hdfs using Oozie version 4.2.0.2.6.5.0-292 via a script file, sample.sh.
Yes, I know the Kafka HDFS connector can be run directly, but here it has to happen via Oozie.
Kafka has a topic named sample with some data in it.
I'm trying to push that data to HDFS via Oozie.
I have consulted a lot of resources before coming here, but no luck.

ERROR

Launcher ERROR, reason: Main class [org.apache.oozie.action.hadoop.ShellMain], exit code [1]
2018-07-25 09:54:16,945  INFO ActionEndXCommand:520 - SERVER[nnuat.iot.com] USER[root] GROUP[-] TOKEN[] APP[sample] JOB[0000000-180725094930282-oozie-oozi-W] ACTION[0000000-180725094930282-oozie-oozi-W@shell1] ERROR is considered as FAILED for SLA
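
The exit code [1] from ShellMain is generic; the script's actual stdout/stderr ends up in the YARN container logs of the launcher job. Roughly, they can be pulled like this (the application ID below is a placeholder; the real one is shown in the Oozie console or by oozie job -info):

# Find the launcher application, then dump its container logs
yarn application -list -appStates FAILED,FINISHED
yarn logs -applicationId application_1532512345678_0001   # placeholder ID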

All three files (sample.sh, job.properties, workflow.xml) are in HDFS under /user/root/sample, and I have given permissions on all of them.

Note: Oozie is running in a cluster, so all three nodes have the same paths and files as the namenode (/root/oozie-demo), and Confluent Kafka (/opt/confluent-4.1.1) is installed on each as well.
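
For reference, the three files can be staged in HDFS roughly like this (a sketch; the exact permission bits are an assumption):

hdfs dfs -mkdir -p /user/root/sample
hdfs dfs -put -f sample.sh job.properties workflow.xml /user/root/sample/
hdfs dfs -chmod -R 755 /user/root/sample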

job.properties

nameNode=hdfs://171.18.1.192:8020
jobTracker=171.18.1.192:8050
queueName=default
oozie.libpath=${nameNode}/user/oozie/share/lib/lib_20180703063118
oozie.wf.rerun.failnodes=true
oozie.use.system.libpath=true
oozieProjectRoot=${nameNode}/user/${user.name}
oozie.wf.application.path=${nameNode}/user/${user.name}/sample
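
For completeness, a workflow with these properties is typically submitted like this (the Oozie server URL is an assumption; port 11000 is the default, and job.properties is read from the local filesystem):

oozie job -oozie http://171.18.1.192:11000/oozie -config job.properties -run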

workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.3" name="sample">
    <start to="shell1"/>
    <action name="shell1">
        <shell xmlns="uri:oozie:shell-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>hadoop.proxyuser.oozie.hosts</name>
                    <value>*</value>
                </property>
                <property>
                    <name>hadoop.proxyuser.oozie.groups</name>
                    <value>*</value>
                </property>
                <property>
                    <name>oozie.launcher.mapreduce.map.java.opts</name>
                    <value>-verbose</value>
                </property>
            </configuration>
            <!--<exec>${myscript}</exec>-->
            <exec>sample.sh</exec>
            <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
            <file>hdfs://171.18.1.192:8020/user/root/sample/sample.sh</file>
            <capture-output/>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Shell action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <kill name="fail-output">
        <message>Incorrect output, expected [Hello Oozie] but was [${wf:actionData('shell1')['my_output']}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

sample.sh

#!/bin/bash
# Start a standalone Kafka Connect worker: the first argument is the worker
# config (Avro converters + Schema Registry), the second is the HDFS sink
# connector config.
sudo /opt/confluent-4.1.1/bin/connect-standalone /opt/confluent-4.1.1/etc/schema-registry/connect-avro-standalone.properties /opt/confluent-4.1.1/etc/kafka-connect-hdfs/IOT_DEMO-hdfs.properties

I have not been able to find the cause of the error. I have also tried putting all the jars from confluent-kafka into the oozie/lib directory in HDFS.

Link to the YARN and Oozie error logs: yarn-oozie-error-logs

Thanks!

1 Answer:

Answer 0 (score: 0)

Kafka Connect is designed to run entirely on its own, not to be scheduled through Oozie.

It does not die unless an error occurs, and if Oozie restarts a failed task you are almost guaranteed to get duplicated data on HDFS, because Connect offsets are not stored persistently anywhere except local disk (assuming Connect restarts on a separate machine), so I don't see the point of doing this.
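
To illustrate the offsets point: a standalone worker persists its progress only to a local file, configured in the worker properties (connect-avro-standalone.properties) roughly like this; /tmp/connect.offsets is the usual example default:

# Standalone Connect keeps offsets in a local file only; if the worker is
# restarted on a different machine, this state is not available there.
offset.storage.file.filename=/tmp/connect.offsets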

You should instead run connect-distributed.sh as a system service on a dedicated set of machines, then POST the connector configuration JSON to the Connect HTTP endpoint. The tasks will then be distributed as part of the Connect framework, and offsets are stored persistently back into a Kafka topic for fault tolerance.
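
As a sketch (the connector name and host are illustrative, and flush.size is just an example value), registering the same HDFS sink against a distributed Connect cluster looks roughly like this:

curl -X POST -H "Content-Type: application/json" http://connect-host:8083/connectors -d '{
  "name": "iot-demo-hdfs",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "1",
    "topics": "sample",
    "hdfs.url": "hdfs://171.18.1.192:8020",
    "flush.size": "3"
  }
}'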


If you absolutely want to use Oozie, Confluent includes the Camus tool, which is deprecated in favor of Connect, but I maintained a Camus + Oozie pipeline for quite a while and it works well; it just becomes hard to monitor for failures once you add lots of topics. Apache Gobblin is the second iteration of that project and is not maintained by Confluent.

It also looks like you are running HDP, so Apache NiFi should be installable on your cluster to handle Kafka- and HDFS-related tasks as well.