Loading data from a file into a table using HBase MapReduce

Posted: 2012-09-12 10:12:18

Tags: java mapreduce hbase

I need to load data from a file located in HDFS into an HBase table using HBase MapReduce. My CSV file contains only the values for the column qualifiers, as shown below:

Now, how do I load these values into my HBase table from the MapReduce program, and how do I auto-generate the row ID?

    Class:


    public class SampleExample {

        private static final String NAME = "SampleExample"; // class name

        static class Uploader extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

            private long statuspoint = 100;
            private long count = 0;

            @Override
            public void map(LongWritable key, Text line, Context context)
                    throws IOException {
                String[] values = line.toString().split(",");
                /* How to read values into columnQualifier and how to generate row id */
                // put function-------------------
                try {
                    context.write(new ImmutableBytesWritable(row), put);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                if (++count % statuspoint == 0) {
                    context.setStatus("Emitting Put " + count);
                }
            }
        }

        public static Job configureJob(Configuration conf, String[] args)
                throws IOException {

        }
    }

Error:

12/09/17 05:23:30 INFO mapred.JobClient: Task Id : attempt_201209041554_0071_m_000000_0, Status : FAILED
java.io.IOException: Type mismatch in value from map: expected org.apache.hadoop.io.Writable, recieved org.apache.hadoop.hbase.client.Put
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:1019)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:691)
        at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
        at com.hbase.Administration$Uploader.map(HealthAdministration.java:51)
        at com.hbase.Administration$Uploader.map(HealthAdministration.java:1)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)

Can anyone help me? I am not able to figure out how to read the values into the column qualifiers.

3 Answers:

Answer 0 (score: 1):

    String stringLine = line.toString();
    StringTokenizer stringTokenizer = new StringTokenizer(stringLine, "\t");

    // Use the byte offset of the line (the map key) as an auto-generated row id.
    byte[] row = Bytes.toBytes(key.get());
    Put put = new Put(row);
    put.add(family, column1, stringTokenizer.nextToken().getBytes());
    put.add(family, column2, stringTokenizer.nextToken().getBytes());
    put.add(family, column3, stringTokenizer.nextToken().getBytes());
    put.add(family, column4, stringTokenizer.nextToken().getBytes());

    try {
        context.write(new ImmutableBytesWritable(row), put);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
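
The family and column1..column4 byte arrays are not defined in the snippet above. A minimal sketch of how they might be declared as fields of the mapper, assuming a column family named "cf" and placeholder qualifier names (the family and qualifier names are assumptions, not part of the original answer):

    // Hypothetical schema constants -- replace with the real family/qualifier names.
    // Bytes is org.apache.hadoop.hbase.util.Bytes.
    private static final byte[] family  = Bytes.toBytes("cf");
    private static final byte[] column1 = Bytes.toBytes("c1");
    private static final byte[] column2 = Bytes.toBytes("c2");
    private static final byte[] column3 = Bytes.toBytes("c3");
    private static final byte[] column4 = Bytes.toBytes("c4");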

Answer 1 (score: 0):

Please change your map & reduce as below. In the map, do only the work of generating the row ID, and pass that row ID and the line (as-is) to the reducer:

    map {
        byte[] row = Bytes.toBytes(key.get());
        try {
            context.write(new ImmutableBytesWritable(row), line);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

And change the reduce as follows:

    @Override
    reduce(ImmutableBytesWritable row, Text line) {
        String stringLine = line.toString();
        StringTokenizer stringTokenizer = new StringTokenizer(stringLine, "\t");

        Put put = new Put(row.get());
        put.add(family, column1, stringTokenizer.nextToken().getBytes());
        put.add(family, column2, stringTokenizer.nextToken().getBytes());
        put.add(family, column3, stringTokenizer.nextToken().getBytes());
        put.add(family, column4, stringTokenizer.nextToken().getBytes());

        try {
            context.write(row, put);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

Please make the appropriate changes to your own code based on the above. The exception occurs because, when the number of reducers is positive, the map function cannot write to the table (or use the Put object), so the context.write(writable, put) call is moved into the reduce, which has the table name and writes the final output. I hope this works; otherwise I will write working code for the same input file and paste it here.
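
For this map-then-reduce layout, the driver also has to bind the reducer to the HBase table. Below is a minimal sketch of such a configureJob, assuming the input path comes in args[0], the table name in args[1], and a reducer class called PutReducer that extends TableReducer; those positions and the PutReducer name are assumptions, not taken from the original answer:

    public static Job configureJob(Configuration conf, String[] args) throws IOException {
        Path inputPath = new Path(args[0]);   // HDFS input file (assumed argument position)
        String tableName = args[1];           // target HBase table (assumed argument position)

        Job job = new Job(conf, NAME + "_" + tableName);
        job.setJarByClass(Uploader.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.setInputPaths(job, inputPath);

        job.setMapperClass(Uploader.class);
        // The map now emits (ImmutableBytesWritable, Text), matching the reduce input above.
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Text.class);

        // Wires TableOutputFormat to the table and registers the reducer that builds the Puts.
        TableMapReduceUtil.initTableReducerJob(tableName, PutReducer.class, job);
        return job;
    }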

Answer 2 (score: 0):

You only need to remove the +1 in the Put call, like this: Put put = new Put(key.get()); and uncomment job.setNumReduceTasks(0); that will definitely work.
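
For this map-only variant, where the Uploader mapper writes the Put directly, the driver would configure TableOutputFormat with no reducer. A minimal sketch under the same assumptions as above (table name in args[1] is a placeholder):

    // Map-only job: the mapper's (ImmutableBytesWritable, Put) output goes straight to the table.
    Job job = new Job(conf, NAME);
    job.setJarByClass(Uploader.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    job.setMapperClass(Uploader.class);

    // Passing null as the reducer only sets up TableOutputFormat for the given table.
    TableMapReduceUtil.initTableReducerJob(args[1], null, job);
    job.setNumReduceTasks(0);

With zero reduce tasks, the type mismatch from the original error goes away, because the map output is handed directly to TableOutputFormat, which accepts Put values.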