HFile creation - Added a key not lexically larger than previous key

Date: 2014-09-16 03:15:10

Tags: hadoop mapreduce hbase

I have a program that creates Put objects -

    Put put = new Put(Bytes.add(someKey));
    put.add(COLUMN_FAMILY, colName, timeStamp, dataByteArr); 
    return put;
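For context on the error below: the Put's row key becomes part of the HFile key, and HFiles require keys to be appended in strictly increasing lexicographic (unsigned byte-by-byte) order. A minimal stdlib-only sketch of that comparison (the class name `KeyOrder` is illustrative, not an HBase class):

```java
import java.util.Arrays;

public class KeyOrder {
    // Compare two row keys the way HBase does: byte by byte, unsigned.
    static int compareKeys(byte[] a, byte[] b) {
        return Arrays.compareUnsigned(a, b); // Java 9+; negative if a sorts before b
    }

    public static void main(String[] args) {
        byte[] row1 = "row-10".getBytes();
        byte[] row2 = "row-2".getBytes();
        // Lexicographically "row-10" < "row-2" because '1' < '2',
        // so numeric row IDs need zero-padding to sort as expected.
        System.out.println(compareKeys(row1, row2) < 0);
    }
}
```
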

I am trying to create HFiles for these Puts using the following code.

    FileInputFormat.setInputPaths(job, new Path(baseDir + "/" + childInputDir + "*"));

    job.setInputFormatClass(TolerantSequenceFileInputFormat.class);

    job.setMapperClass(KeyPutImporter.class);
    HTable htable = new HTable(conf, tableName);

    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    job.setOutputFormatClass(HFileOutputFormat.class);

    HFileOutputFormat.configureIncrementalLoad(job, htable);
    Path hfileOutputPath = new Path(baseDir + "/" + childOutputDir);
    HFileOutputFormat.setOutputPath(job, hfileOutputPath);

    TableMapReduceUtil.addDependencyJars(job.getConfiguration(), com.google.common.base.Preconditions.class);

    boolean success = job.waitForCompletion(true);
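The exception below is raised by `HFile$Writer.checkKey`, which rejects any cell that is not strictly larger than the previously written one; via `PutSortReducer` this commonly indicates two identical cells (same row, family, qualifier, and timestamp) reaching the same reduce call. A stdlib-only sketch of that guard, assuming keys arrive as byte arrays (class and method names here are illustrative, not HBase's):

```java
import java.util.Arrays;

public class AppendGuard {
    private byte[] lastKey;

    // Mirrors the spirit of HFile.Writer.checkKey: every appended key
    // must be strictly larger (unsigned lexicographic) than the last.
    void append(byte[] key) {
        if (lastKey != null && Arrays.compareUnsigned(key, lastKey) <= 0) {
            throw new IllegalArgumentException(
                "Added a key not lexically larger than previous key");
        }
        lastKey = key.clone();
    }

    public static void main(String[] args) {
        AppendGuard writer = new AppendGuard();
        writer.append("row-1".getBytes());
        writer.append("row-2".getBytes());
        try {
            writer.append("row-2".getBytes()); // duplicate key is rejected
            System.out.println("accepted");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```
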

While the HFiles are being created, I get the following exception.

    java.io.IOException: Added a key not lexically larger than previous key=
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.checkKey(HFile.java:577)
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:533)
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.append(HFile.java:501)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.write(HFileOutputFormat.java:141)
    at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.write(HFileOutputFormat.java:96)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:514)
    at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
    at org.apache.hadoop.hbase.mapreduce.PutSortReducer.reduce(PutSortReducer.java:72)

Can you help?

0 answers:

No answers