I am writing an MR job that takes an HBase table as input and dumps it to an HDFS file. I use the MultipleInputs class (from Hadoop) because I intend to pull in several data sources. I wrote a very simple MR program (see the source code below). Unfortunately, I ran into the following error:

java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.hbase.io.ImmutableBytesWritable

I am running pseudo-distributed Hadoop (1.2.0) and pseudo-distributed HBase (0.95.1-hadoop1).

Here is the complete source code. One interesting thing: if I comment out the MultipleInputs line "MultipleInputs.addInputPath(job, inputPath1, TextInputFormat.class, TableMap.class);", the MR job runs fine.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MixMR {

    // Maps each HBase row to a (row key, cf:c1 value) pair of Texts.
    public static class TableMap extends TableMapper<Text, Text> {
        public static final byte[] CF = "cf".getBytes();
        public static final byte[] ATTR1 = "c1".getBytes();

        @Override
        public void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            String key = Bytes.toString(row.get());
            String val = new String(value.getValue(CF, ATTR1));
            context.write(new Text(key), new Text(val));
        }
    }

    // Identity-style reducer: writes each value back out under its key.
    public static class Reduce extends Reducer<Object, Text, Object, Text> {
        @Override
        public void reduce(Object key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            String ks = key.toString();
            for (Text val : values) {
                context.write(new Text(ks), val);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Path inputPath1 = new Path(args[0]);
        Path outputPath = new Path(args[1]);
        String tableName1 = "test";

        Configuration config = HBaseConfiguration.create();
        Job job = new Job(config, "ExampleRead");
        job.setJarByClass(MixMR.class); // class that contains mapper

        Scan scan = new Scan();
        scan.setCaching(500);       // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false); // don't set to true for MR jobs
        scan.addFamily(Bytes.toBytes("cf"));

        TableMapReduceUtil.initTableMapperJob(
                tableName1,     // input HBase table name
                scan,           // Scan instance to control CF and attribute selection
                TableMap.class, // mapper
                Text.class,     // mapper output key
                Text.class,     // mapper output value
                job);

        job.setReducerClass(Reduce.class); // reducer class
        job.setOutputFormatClass(TextOutputFormat.class);

        // inputPath1 here has no effect for the HBase table
        MultipleInputs.addInputPath(job, inputPath1, TextInputFormat.class, TableMap.class);

        FileOutputFormat.setOutputPath(job, outputPath);
        job.waitForCompletion(true);
    }
}
Answer 0 (score: 0)
I figured out the answer: in the following statement, replace TextInputFormat.class with TableInputFormat.class:

MultipleInputs.addInputPath(job, inputPath1, TextInputFormat.class, TableMap.class);

That also explains the exception: TextInputFormat feeds the mapper LongWritable byte-offset keys and Text line values, while TableMap is a TableMapper and expects ImmutableBytesWritable keys and Result values, which is exactly the cast that fails in the stack trace.
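For reference, a minimal sketch of the corrected driver line, assuming TableInputFormat means the HBase class org.apache.hadoop.hbase.mapreduce.TableInputFormat. The table name and Scan were already placed in the job configuration by initTableMapperJob, so, as the comment in the question notes, the path argument has no effect for the HBase source:

import org.apache.hadoop.hbase.mapreduce.TableInputFormat;

MultipleInputs.addInputPath(job, inputPath1, TableInputFormat.class, TableMap.class);

Since MultipleInputs was chosen in order to add more data sources later, a genuinely file-based second source would get its own mapper rather than reusing TableMap, so that each input format stays paired with a mapper matching its key/value types. A sketch with a hypothetical TextSourceMap (the class name, the extra args[2] path, and the tab-separated line format are assumptions, not part of the original code):

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Mapper;

public static class TextSourceMap extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Assumed input format: one "key<TAB>value" pair per line
        String[] parts = line.toString().split("\t", 2);
        if (parts.length == 2) {
            context.write(new Text(parts[0]), new Text(parts[1]));
        }
    }
}

// In main(), alongside the HBase source:
MultipleInputs.addInputPath(job, new Path(args[2]), TextInputFormat.class, TextSourceMap.class);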