HBase connection failure in a MapReduce job run from an Oozie workflow

Time: 2017-03-08 10:57:01

Tags: mapreduce oozie hadoop2 kerberos-delegation

I am running my MapReduce job as a Java action from an Oozie workflow. When I run the MapReduce job directly on my Hadoop cluster it succeeds, but when I run the same jar from the Oozie workflow an exception is thrown.

Here is my workflow.xml:

<workflow-app name="HBaseToFileDriver" xmlns="uri:oozie:workflow:0.1">

    <start to="mapReduceAction"/>

    <action name="mapReduceAction">
        <java>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${outputDir}"/>
            </prepare>

            <configuration>
                <property>
                    <name>mapred.mapper.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>mapred.reducer.new-api</name>
                    <value>true</value>
                </property>
                <property>
                    <name>oozie.libpath</name>
                    <value>${appPath}/lib</value>
                </property>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>root.fricadev</value>
                </property>
            </configuration>

            <main-class>com.thomsonretuers.hbase.HBaseToFileDriver</main-class>
            <arg>fricadev:FinancialLineItem</arg>
            <capture-output/>
        </java>
        <ok to="end"/>
        <error to="killJob"/>
    </action>

    <kill name="killJob">
        <message>"Killed job due to error: ${wf:errorMessage(wf:lastErrorNode())}"</message>
    </kill>

    <end name="end"/>
</workflow-app>

When I look at the logs in YARN, I see the exception below. Even though the job shows as successful, no output file is generated.

1 Answer:

Answer 0: (score: 0)

Have you considered the Oozie Java Action documentation?

IMPORTANT: In order for a Java action to succeed on a secure cluster, it must propagate the Hadoop delegation token like in the following code snippet (this is benign on non-secure clusters):

// propagate delegation related props from launcher job to MR job
if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
    jobConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
}

You have to read HADOOP_TOKEN_FILE_LOCATION from the system environment variable and set it on the mapreduce.job.credentials.binary property.

HADOOP_TOKEN_FILE_LOCATION is set by Oozie at runtime.
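
For reference, a minimal sketch of how that snippet could fit into a driver like the one in the question. Only the token-propagation lines come from the Oozie documentation quoted above; the job name and the elided job setup are placeholders, not the poster's actual code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class HBaseToFileDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Propagate the delegation token from the Oozie launcher to the MR job.
        // HADOOP_TOKEN_FILE_LOCATION is set by Oozie in the launcher environment;
        // on a non-secure cluster this block simply does nothing.
        if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
            conf.set("mapreduce.job.credentials.binary",
                     System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
        }

        Job job = Job.getInstance(conf, "HBaseToFile"); // job name is a placeholder
        job.setJarByClass(HBaseToFileDriver.class);

        // ... the rest of the job setup (HBase scan, TableMapReduceUtil.initTableMapperJob,
        // output path, etc.) stays exactly as in the existing driver ...

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The key point is that the property must be set on the Configuration before the Job is created and submitted, so that the MR job launched by the Java action inherits the launcher's delegation tokens.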