Spark: creating an HFile with multiple columns for one rowKey

Asked: 2017-09-22 06:56:55

Tags: apache-spark hbase hfile

JavaRDD<String> hbaseFile = jsc.textFile(HDFS_MASTER + HBASE_FILE);
JavaPairRDD<ImmutableBytesWritable, KeyValue> putJavaRDD = hbaseFile.mapToPair(line -> convertToKVCol1(line, COLUMN_AGE));
// sortByKey returns a new RDD; the sorted result is the one that must be saved
JavaPairRDD<ImmutableBytesWritable, KeyValue> sortedRDD = putJavaRDD.sortByKey(true);
sortedRDD.saveAsNewAPIHadoopFile(stagingFolder, ImmutableBytesWritable.class, KeyValue.class, HFileOutputFormat2.class, conf);

private static Tuple2<ImmutableBytesWritable, KeyValue> convertToKVCol1(String beanString, byte[] column) {
    InspurUserEntity inspurUserEntity = gson.fromJson(beanString, InspurUserEntity.class);
    // Composite rowkey: level-1 department + level-2 department + id
    String rowKey = inspurUserEntity.getDepartment_level1() + "_" + inspurUserEntity.getDepartment_level2() + "_" + inspurUserEntity.getId();
    // Emits exactly one KeyValue (cell) per input line
    return new Tuple2<>(new ImmutableBytesWritable(Bytes.toBytes(rowKey)),
            new KeyValue(Bytes.toBytes(rowKey), COLUMN_FAMILY, column, Bytes.toBytes(inspurUserEntity.getAge())));
}

The code above is mine, but it only works for a single column. Does anyone have an idea how to create an HFile with multiple columns for one rowKey?

2 Answers:

Answer 0 (score: 0)

You have to use an array instead of ImmutableBytesWritable in your declaration.

Answer 1 (score: 0)

You can create multiple Tuple2<ImmutableBytesWritable, KeyValue> entries for a single row, where the key stays the same and each KeyValue represents an individual cell value. Make sure the columns are sorted lexicographically. Then call saveAsNewAPIHadoopFile on the resulting JavaPairRDD<ImmutableBytesWritable, KeyValue>.

    final JavaPairRDD<ImmutableBytesWritable, KeyValue> writables = myRdd.flatMapToPair(record -> {
        final List<Tuple2<ImmutableBytesWritable, KeyValue>> listToReturn = new ArrayList<>();
        // Add the first column to the collection
        listToReturn.add(new Tuple2<>(
                new ImmutableBytesWritable(Bytes.toBytes(record.getRowKey())),
                new KeyValue(Bytes.toBytes(record.getRowKey()), Bytes.toBytes("CF"),
                        Bytes.toBytes("COL1"), System.currentTimeMillis(),
                        Bytes.toBytes(record.getCol1()))));
        // Add subsequent columns (qualifiers must be added in lexicographic order)
        listToReturn.add(new Tuple2<>(
                new ImmutableBytesWritable(Bytes.toBytes(record.getRowKey())),
                new KeyValue(Bytes.toBytes(record.getRowKey()), Bytes.toBytes("CF"),
                        Bytes.toBytes("COL2"), System.currentTimeMillis(),
                        Bytes.toBytes(record.getCol2()))));
        // flatMapToPair expects an Iterator (Spark 2.x); the original snippet
        // was missing this return statement
        return listToReturn.iterator();
    });

Note: this is the main gotcha — you must also add the columns to the RDD in lexicographic order.

Essentially, the combination of row key + column family + column qualifier must be sorted before the HFiles are written out.
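The ordering requirement above can be illustrated without Spark or HBase on the classpath: HBase compares cell keys as unsigned byte arrays, so the concatenation rowkey + family + qualifier must sort that way before cells are written. A minimal, self-contained sketch (the class name, rowkey values, and qualifiers here are hypothetical, and the comparator mimics HBase's Bytes.compareTo rather than calling it):

```java
import java.util.Arrays;

public class CellOrderDemo {

    // Unsigned lexicographic byte-array comparison, mimicking HBase's Bytes.compareTo
    static int compareUnsigned(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int diff = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (diff != 0) return diff;
        }
        return a.length - b.length;
    }

    // Concatenate rowkey, family, and qualifier into one sortable cell key
    static byte[] cellKey(String rowKey, String family, String qualifier) {
        return (rowKey + family + qualifier).getBytes();
    }

    // Sort cell keys the way HFile writing requires them to arrive
    static byte[][] sortedCellKeys(byte[][] keys) {
        byte[][] copy = Arrays.copyOf(keys, keys.length);
        Arrays.sort(copy, CellOrderDemo::compareUnsigned);
        return copy;
    }

    public static void main(String[] args) {
        // Two cells of the same row: qualifier "age" must be written before "name"
        byte[][] keys = {
            cellKey("dept1_dept2_42", "cf", "name"),
            cellKey("dept1_dept2_42", "cf", "age"),
        };
        byte[][] sorted = sortedCellKeys(keys);
        System.out.println(new String(sorted[0])); // prints dept1_dept2_42cfage
    }
}
```

Emitting KeyValues in this order inside flatMapToPair, and then sorting the RDD by key, is what keeps HFileOutputFormat2 from rejecting out-of-order cells.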