MapReduce job using HBase tables as source and sink in a Spring XD batch job

Date: 2015-04-07 18:59:34

Tags: spring mapreduce hbase

How do we configure a MapReduce job in Spring that uses HBase tables as both source and sink? I am planning to create a Spring XD batch job out of a MapReduce job, but I want the Hadoop job to read from and write to HBase tables, along the lines of TableMapReduceUtil.initTableMapperJob() and TableMapReduceUtil.initTableReducerJob().

The <hdp:job> namespace does not currently support specifying input/output tables.
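For reference, outside of Spring the same source/sink wiring is done in a plain Hadoop driver roughly as follows. This is a minimal sketch: the table names match the answer below, while HBaseTableJobDriver, MyTableMapper, and MyTableReducer are hypothetical placeholders (the mapper and reducer are sketched after the answer's code).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class HBaseTableJobDriver {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "hbase-source-sink-job");
        job.setJarByClass(HBaseTableJobDriver.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // fetch more rows per RPC for MR scans
        scan.setCacheBlocks(false);  // don't pollute the region server block cache

        // Source: rows of "SourceTable" are fed to the mapper.
        TableMapReduceUtil.initTableMapperJob(
                "SourceTable", scan, MyTableMapper.class,
                Text.class, Result.class, job);

        // Sink: the reducer's Puts/Deletes go to "TargetTable".
        TableMapReduceUtil.initTableReducerJob(
                "TargetTable", MyTableReducer.class, job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}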

1 Answer:

Answer 0 (score: 0):

I was able to solve this by using another bean that takes the Hadoop job as input and returns it after setting up the Scan() and the source and sink HBase tables. With scope="job" on the <hdp:job-tasklet> and scope="prototype" on the <hdp:job> bean, I can run the same MR job multiple times in Spring XD; without this, after the first successful run the Job is left in the RUNNING state instead of the DEFINE state.

import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class InitJobTasklet {

    private Job job;

    public void setJob(Object job) {
        this.job = (Job) job;
    }

    public Job getJob() throws IOException {
        Scan scan = new Scan();
        System.out.println("Initializing the Hadoop job with HBase tables and a Scan object...");

        // Source: feed rows of "SourceTable" to the mapper. The class passed
        // here must extend TableMapper; MyTableMapper is a placeholder (see below).
        TableMapReduceUtil.initTableMapperJob("SourceTable",
                scan,
                MyTableMapper.class,
                Text.class, Result.class, job);

        // Sink: the reducer writes Puts/Deletes to "TargetTable".
        TableMapReduceUtil.initTableReducerJob(
                "TargetTable",         // output table
                MyTableReducer.class,  // reducer class, must extend TableReducer
                job);
        job.setNumReduceTasks(1);

        return job;
    }
}
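The classes passed to initTableMapperJob()/initTableReducerJob() must extend HBase's TableMapper and TableReducer; the raw org.apache.hadoop.mapreduce.Mapper/Reducer classes posted originally do not satisfy those type bounds. Below is a minimal, hypothetical sketch of the MyTableMapper and MyTableReducer placeholders used above (one class per file; the column family "cf" and qualifier "copied" are also placeholders). Result is usable as a map output value here because TableMapReduceUtil registers the required HBase serializations on the job.

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;

// Reads rows from the source table. TableMapper fixes the input
// types to (ImmutableBytesWritable row key, Result row contents).
public class MyTableMapper extends TableMapper<Text, Result> {
    @Override
    protected void map(ImmutableBytesWritable row, Result value, Context context)
            throws IOException, InterruptedException {
        // Emit the row key as Text and pass the whole Result through.
        context.write(new Text(row.get()), value);
    }
}

import java.io.IOException;

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;

// Writes one Put per incoming Result into the target table. The key
// passed to context.write() is ignored by TableOutputFormat, so null
// is the idiomatic choice.
public class MyTableReducer extends TableReducer<Text, Result, ImmutableBytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<Result> values, Context context)
            throws IOException, InterruptedException {
        for (Result result : values) {
            Put put = new Put(result.getRow());
            // Copy the first cell's value into a fixed column as a
            // placeholder transformation (addColumn in HBase 1.0+).
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("copied"), result.value());
            context.write(null, put);
        }
    }
}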

The Spring Batch job configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:hdp="http://www.springframework.org/schema/hadoop"
       xmlns:batch="http://www.springframework.org/schema/batch"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/hadoop http://www.springframework.org/schema/hadoop/spring-hadoop.xsd
           http://www.springframework.org/schema/batch http://www.springframework.org/schema/batch/spring-batch.xsd">

    <!-- The MR job definition; prototype scope so that a fresh Job instance
         is created for every run (a Hadoop Job cannot be resubmitted once
         it has left the DEFINE state). -->
    <hdp:job id="mr-hbase-job"
             output-path="/output"
             mapper="mapperclass"
             reducer="reduceclass"
             map-key="org.apache.hadoop.hbase.io.ImmutableBytesWritable"
             map-value="org.apache.hadoop.hbase.client.Result"
             input-format="org.apache.hadoop.hbase.mapreduce.TableInputFormat"
             output-format="org.apache.hadoop.hbase.mapreduce.TableOutputFormat"
             jar-by-class="processor class"
             scope="prototype">
    </hdp:job>

    <batch:job id="job">
        <batch:step id="step1">
            <hdp:job-tasklet id="hadoop-tasklet" job="#{initTask.job}"
                             wait-for-completion="true" scope="job"/>
        </batch:step>
    </batch:job>

    <hdp:configuration id="hadoopConfiguration">
        fs.defaultFS=hdfs://localhost:9000
        hadoop.tmp.dir=/home/smunigati/hadoop/temp
        hbase.zookeeper.quorum=localhost
        hbase.zookeeper.property.clientPort=2181
    </hdp:configuration>

    <hdp:hbase-configuration id="hbaseConfiguration" configuration-ref="hadoopConfiguration"/>

    <!-- Also prototype-scoped so each run gets a freshly initialized Job. -->
    <bean id="initTask" class="com.somthing.InitJobTasklet" scope="prototype">
        <property name="job" ref="mr-hbase-job"/>
    </bean>

</beans>