I am new to Hadoop. I have a MapReduce job that is supposed to read its input from HDFS and write the reducer's output to HBase. I have not found any good examples of this.
Here is the code. The error I get when running this example is a type mismatch in map: "expected ImmutableBytesWritable received IntWritable".
Mapper class
public static class AddValueMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, IntWritable> {

    /* input  <key: line offset,  value: full line>
     * output <key: log key,      value: integer value> */
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        int pos = line.indexOf("=");

        // Key part: everything before the '='
        String p1 = line.substring(0, pos).trim();
        byte[] outKey = Bytes.toBytes(p1);

        // Value part: everything after the '='
        String p2 = line.substring(pos + 1).trim();
        int outValue = Integer.parseInt(p2);

        context.write(new ImmutableBytesWritable(outKey), new IntWritable(outValue));
    }
}
Reducer class
public static class AddValuesReducer
        extends TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> {

    @Override
    public void reduce(ImmutableBytesWritable key, Iterable<IntWritable> values,
            Context context) throws IOException, InterruptedException {
        long total = 0;

        // Sum all values for this key
        for (IntWritable val : values) {
            total += val.get();
        }

        // Put to HBase: column family "data", qualifier "total"
        Put put = new Put(key.get());
        put.add(Bytes.toBytes("data"), Bytes.toBytes("total"), Bytes.toBytes(total));
        context.write(key, put);
    }
}
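For reference, since the reducer stores the sum with Bytes.toBytes(long), it has to be read back as a long. A minimal check using the old HTable client API; the table name "logdata" and the row key are placeholders I made up, not part of the original job:
Configuration conf = HBaseConfiguration.create();
HTable table = new HTable(conf, "logdata");              // table name is an assumption
Get get = new Get(Bytes.toBytes("someLogKey"));           // a row key the mapper emitted
Result result = table.get(get);
byte[] raw = result.getValue(Bytes.toBytes("data"), Bytes.toBytes("total"));
long total = Bytes.toLong(raw);                           // 8-byte long written by the reducer
System.out.println("total = " + total);
table.close();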
A similar job of mine that writes its output to HDFS works fine.
Edit, 18-06-2013: the university project was completed successfully two years ago. For the job configuration (the driver part), see the accepted answer.
Answer 0 (score: 6)
Here is the code that solved the problem:
Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "JOB_NAME");
job.setJarByClass(yourclass.class);
job.setMapperClass(yourMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
FileInputFormat.setInputPaths(job, new Path(inputPath));
TableMapReduceUtil.initTableReducerJob(TABLE,
        yourReducer.class, job);
job.setReducerClass(yourReducer.class);
job.waitForCompletion(true);
class yourMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    // @Override map()
}

class yourReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {
    // @Override reduce()
}
Answer 1 (score: 1)
Not sure why the HDFS version works: normally you have to set the job's input format, and FileInputFormat is an abstract class. Maybe you left out some lines? Such as
job.setInputFormatClass(TextInputFormat.class);
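For context, a sketch of where that line would sit among the other driver settings (based on the accepted answer's driver; TextInputFormat comes from org.apache.hadoop.mapreduce.lib.input):
Job job = new Job(conf, "JOB_NAME");
job.setJarByClass(yourclass.class);
job.setInputFormatClass(TextInputFormat.class);   // make the input format explicit
FileInputFormat.setInputPaths(job, new Path(inputPath));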
Answer 2 (score: 1)
Answer 3 (score: 0)
public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
Change it to ImmutableBytesWritable and IntWritable.
I am not sure... hope it works.