"java.io.IOException: Pass a Delete or a Put" when reading from HDFS and writing to HBase

Date: 2014-02-19 16:53:29

Tags: java mapreduce hbase hdfs put

This error has been driving me crazy for a week. There is a post with the same problem, Pass a Delete or a Put error in hbase mapreduce, but the solution given there didn't actually work for me.

My driver:

    Configuration conf = HBaseConfiguration.create();
    Job job;
    try {
        job = new Job(conf, "Training");
        job.setJarByClass(TrainingDriver.class);
        job.setMapperClass(TrainingMapper.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("my/path"));
        Scan scan = new Scan();
        scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false);  // don't set to true for MR jobs
        // set other scan attrs
        TableMapReduceUtil.initTableReducerJob(Constants.PREFIX_TABLE,
                TrainingReducer.class, job);
        job.setReducerClass(TrainingReducer.class);
        //job.setNumReduceTasks(1);   // at least one, adjust as required
        try {
            job.waitForCompletion(true);
        } catch (ClassNotFoundException | InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
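
(Side note on the driver: TableMapReduceUtil.initTableReducerJob only configures the reducer/output side of the job, and none of its overloads take a Scan, so the Scan built above is never actually used when the input comes from HDFS through FileInputFormat. A Scan only matters when the input is an HBase table read via initTableMapperJob; a rough sketch under that assumption, using a hypothetical source table name and mapper class:)

    // Hypothetical alternative input wiring, only relevant if the job read
    // from an HBase table instead of HDFS files. "source_table" and
    // MyTableMapper (which would have to extend TableMapper) are assumed names.
    Scan scan = new Scan();
    scan.setCaching(500);
    scan.setCacheBlocks(false);
    TableMapReduceUtil.initTableMapperJob(
            "source_table",       // assumed source table name
            scan,                 // here the Scan is actually consumed
            MyTableMapper.class,  // assumed mapper extending TableMapper
            LongWritable.class,   // map output key class
            Text.class,           // map output value class
            job);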

My mapper:

public class TrainingMapper extends
        Mapper<LongWritable, Text, LongWritable, Text> {

    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(key, new Text(generateNewText()));
    }
}

My reducer:

public class TrainingReducer extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

    public void reduce(LongWritable key, Iterator<Text> values, Context context)
            throws IOException {
        while (values.hasNext()) {
            try {
                Put put = new Put(Bytes.toBytes(key.toString()));
                put.add("cf1".getBytes(), "c1".getBytes(), values.next().getBytes());
                context.write(null, put);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}

Do you have any experience with this? Please tell me how to fix it.

1 Answer:

Answer 0 (score: 1):

I found the solution myself.

Insert the @Override annotation before the reduce method and change its second parameter from Iterator<Text> to Iterable<Text>, like this:

    @Override
    public void reduce(LongWritable key, Iterable<Text> values, Context context)
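
For completeness, a minimal sketch of what the whole corrected reducer might look like with that change applied (the "cf1"/"c1" column family and row-key handling are kept from the question; iterating with a for-each loop over the Iterable is my own adaptation, not something stated in the original answer):

    import java.io.IOException;

    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableReducer;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;

    public class TrainingReducer extends TableReducer<LongWritable, Text, ImmutableBytesWritable> {

        @Override
        public void reduce(LongWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Because the signature now matches Reducer.reduce, this method
            // really overrides it and is the one the framework invokes.
            for (Text value : values) {
                Put put = new Put(Bytes.toBytes(key.toString()));
                put.add("cf1".getBytes(), "c1".getBytes(), value.getBytes());
                context.write(null, put); // TableOutputFormat receives a Put, as it expects
            }
        }
    }

That signature mismatch is also what produced the original exception: because reduce(LongWritable, Iterator<Text>, Context) did not override anything, Hadoop ran the default identity reduce, which passed the mapper's Text values straight through to HBase's TableOutputFormat, and that output format throws "java.io.IOException: Pass a Delete or a Put" for any value that is not a Put or a Delete.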