HBase bulk load: append data instead of overwriting it

Date: 2017-03-03 10:18:42

Tags: java hadoop mapreduce hbase

I am loading data into HBase with MapReduce and bulk load, implemented in Java. Basically I wrote a mapper that just reads some bytes from a file and creates a Put, and I set up the reduce side with HFileOutputFormat2.configureIncrementalLoad (full code at the end of the question). LoadIncrementalHFiles.doBulkLoad then writes the data into HBase. This all works fine, but the bulk load overwrites the old values in HBase instead of keeping them. So I am looking for a way to append data, like the append function of the client API. Thanks for reading, and I hope some of you have an idea that can help me :)
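For context, the mapper looks roughly like the following minimal sketch. This is an assumption, since the original mapper is not shown: the "rowkey<sep>value" input layout and the qualifier name "col" are made up; only the class name HBaseBulkLoadMapper and the config keys come from the driver code below.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical sketch of the mapper: read a text line, split it,
// and emit one Put per line keyed by the row key.
public class HBaseBulkLoadMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    private byte[] family;
    private String separator;

    @Override
    protected void setup(Context context) {
        family = Bytes.toBytes(context.getConfiguration().get("COLUMN_FAMILY_1"));
        separator = context.getConfiguration().get("data.seperator");
    }

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] fields = line.toString().split(separator);
        byte[] rowKey = Bytes.toBytes(fields[0]);
        Put put = new Put(rowKey);
        // "col" is an illustrative qualifier, not from the original code.
        put.addColumn(family, Bytes.toBytes("col"), Bytes.toBytes(fields[1]));
        context.write(new ImmutableBytesWritable(rowKey), put);
    }
}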

public int run(String[] args) throws Exception {
    int result = 0;
    String outputPath = args[1];
    Configuration configuration = getConf();
    configuration.set("data.seperator", DATA_SEPERATOR);
    configuration.set("hbase.table.name", TABLE_NAME);
    configuration.set("COLUMN_FAMILY_1", COLUMN_FAMILY_1);
    configuration.set("COLUMN_FAMILY_2", COLUMN_FAMILY_2);

    Job job = Job.getInstance(configuration);
    job.setJarByClass(HBaseBulkLoadDriver.class);
    job.setJobName("Bulk Loading HBase Table::" + TABLE_NAME);
    job.setInputFormatClass(TextInputFormat.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    job.setMapperClass(HBaseBulkLoadMapper.class);

    FileInputFormat.addInputPaths(job, args[0]);
    // Delete the output directory of a previous run before writing new HFiles.
    FileSystem.getLocal(getConf()).delete(new Path(outputPath), true);
    HFileOutputFormat2.setOutputPath(job, new Path(outputPath));

    // configureIncrementalLoad wires up the HFile output format, the
    // total-order partitioner and, because the map output value class
    // is Put, the PutSortReducer as the reducer.
    Connection c = ConnectionFactory.createConnection(configuration);
    Table t = c.getTable(TableName.valueOf(TABLE_NAME));
    RegionLocator rl = c.getRegionLocator(TableName.valueOf(TABLE_NAME));
    HFileOutputFormat2.configureIncrementalLoad(job, t, rl);

    System.out.println("start");
    job.waitForCompletion(true);
    if (job.isSuccessful()) {
        HBaseBulkLoad.doBulkLoad(outputPath, TABLE_NAME);
    } else {
        result = -1;
    }
    return result;
}



public static void doBulkLoad(String pathToHFile, String tableName) {
    try {
        Configuration configuration = new Configuration();
        configuration.set("mapreduce.child.java.opts", "-Xmx1g");
        HBaseConfiguration.addHbaseResources(configuration);
        LoadIncrementalHFiles loadFfiles = new LoadIncrementalHFiles(configuration);

        // Old, deprecated HTable-based variant:
        //HTable hTable = new HTable(configuration, tableName);
        //loadFfiles.doBulkLoad(new Path(pathToHFile), hTable);

        Connection connection = ConnectionFactory.createConnection(configuration);
        Table table = connection.getTable(TableName.valueOf(tableName));
        Admin admin = connection.getAdmin();
        RegionLocator regionLocator = connection.getRegionLocator(TableName.valueOf(tableName));
        // Moves the generated HFiles into the table's regions:
        // path, admin, table, region locator
        loadFfiles.doBulkLoad(new Path(pathToHFile), admin, table, regionLocator);

        System.out.println("Bulk Load Completed..");
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}

As requested in the comments, I am adding the output of the table description here. The table was created with the Python happybase API, and I don't know which option flags that API sets by default...

{NAME => '0', BLOOMFILTER => 'NONE', VERSIONS => '3', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'false', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => '1', BLOOMFILTER => 'NONE', VERSIONS => '3', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'false', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

1 Answer:

Answer (score: 1)

In HFileOutputFormat2.configureIncrementalLoad() (http://atetric.com/atetric/javadoc/org.apache.hbase/hbase-server/1.2.4/src-html/org/apache/hadoop/hbase/mapreduce/HFileOutputFormat2.html#line.408), PutSortReducer is used as the reducer.

In PutSortReducer.reduce() (http://atetric.com/atetric/javadoc/org.apache.hbase/hbase-server/1.2.4/src-html/org/apache/hadoop/hbase/mapreduce/PutSortReducer.html), the KeyValues are stored in a TreeSet whose comparator compares only the keys, not the values. That is why only one value survives.
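A minimal, self-contained sketch of that behavior, assuming the HBase 1.x client classes (the row and column names are made up): two KeyValues that differ only in their value compare as equal under KeyValue.COMPARATOR, so a TreeSet keeps only the first one.

import java.util.TreeSet;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class TreeSetDedupDemo {
    public static void main(String[] args) {
        long ts = 1L; // identical timestamps, as when two Puts share one
        KeyValue a = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("0"),
                Bytes.toBytes("q"), ts, Bytes.toBytes("first"));
        KeyValue b = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("0"),
                Bytes.toBytes("q"), ts, Bytes.toBytes("second"));

        // Same comparator PutSortReducer uses: it orders by row, family,
        // qualifier, timestamp and type; the value is never compared.
        TreeSet<KeyValue> set = new TreeSet<KeyValue>(KeyValue.COMPARATOR);
        set.add(a);
        set.add(b);
        System.out.println(set.size()); // prints 1: the KeyValue holding "second" was dropped
    }
}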

To keep both values, you can create your own reducer based on PutSortReducer in which you keep both values, and set it after configuring the incremental load:

HFileOutputFormat2.configureIncrementalLoad(job, t, rl);
job.setReducerClass(MyReducer.class);
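Here is a minimal sketch of such a reducer, assuming HBase 1.x (MyReducer is a name made up for this example, and the memory/spill handling of the real PutSortReducer is omitted). It collects cells in a list instead of a TreeSet, so cells with identical keys survive:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.KeyValueUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch of a PutSortReducer variant that keeps duplicate cells
// instead of collapsing them in a TreeSet.
public class MyReducer
        extends Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, KeyValue> {

    @Override
    protected void reduce(ImmutableBytesWritable row, Iterable<Put> puts, Context context)
            throws IOException, InterruptedException {
        List<KeyValue> cells = new ArrayList<KeyValue>();
        for (Put p : puts) {
            for (List<Cell> cellList : p.getFamilyCellMap().values()) {
                for (Cell cell : cellList) {
                    cells.add(KeyValueUtil.ensureKeyValue(cell));
                }
            }
        }
        // HFileOutputFormat2 expects the cells of a row in sorted order;
        // a stable sort keeps equal-keyed duplicates next to each other.
        Collections.sort(cells, KeyValue.COMPARATOR);
        for (KeyValue kv : cells) {
            context.write(row, kv);
        }
    }
}

Note that HBase can only return both values as separate versions if the cells end up with distinct timestamps and the column family retains enough versions (VERSIONS => '3' in the table description above).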