How to store into HBase using Pig and HBaseStorage

Date: 2014-01-30 23:32:44

Tags: hadoop hbase apache-pig

In the HBase shell, I created my table with:

create 'pig_table','cf'

In Pig, the following is the result of the alias that I want to store into pig_table:

DUMP B;

which produces tuples with 6 fields:

(D1|30|2014-01-01 13:00,D1,30,7.0,2014-01-01 13:00,DEF)
(D1|30|2014-01-01 22:00,D1,30,1.0,2014-01-01 22:00,JKL)
(D10|20|2014-01-01 11:00,D10,20,4.0,2014-01-01 11:00,PQR)
...

The first field is a concatenation of the 2nd, 3rd, and 5th fields, and it will be used as the HBase rowkey.
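(For reference, a relation shaped like B could be built with CONCAT in a FOREACH. The following is only an illustrative sketch: the relation name A, the field names, and the load path are assumptions rather than part of the original question, and multi-argument CONCAT needs Pig 0.12 or later; older releases require nested CONCAT calls.)

-- Hypothetical sketch: build the rowkey by joining device_id, cost and start_time with '|'
A = LOAD '/data/input' USING PigStorage(',')
        AS (device_id:chararray, cost:int, hours:float,
            start_time:chararray, code:chararray);
B = FOREACH A GENERATE
        CONCAT(device_id, '|', (chararray)cost, '|', start_time) AS rowkey,
        device_id, cost, hours, start_time, code;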

However,

STORE B INTO 'hbase://pig_table' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage ( 'cf:device_id,cf:cost,cf:hours,cf:start_time,cf:code')

results in:

Failed to produce result in "hbase:pig_table"

The log gives me:

Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.pig.data.DataByteArray
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.objToBytes(HBaseStorage.java:924)
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:875)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:551)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.write(WrappedReducer.java:99)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapReduce$Reduce.runPipeline(PigGenericMapReduce.java:468)
... 11 more

What is wrong with my syntax?

1 Answer:

Answer 0 (score: 2)

It seems that HBaseStorage does not automatically convert the tuple's data fields to chararray, which is necessary before they can be stored in HBase. I simply cast them explicitly:

C = FOREACH B GENERATE
    (chararray)$0,
    (chararray)$1,
    (chararray)$2,
    (chararray)$3,
    (chararray)$4,
    (chararray)$5;

STORE C INTO 'hbase://pig_table' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:device_id,cf:cost,cf:hours,cf:start_time,cf:code');
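
A possible variant (an untested sketch that builds on the hypothetical LOAD shown earlier, with the same Pig 0.12+ CONCAT caveat): if every field is declared as chararray in the LOAD schema, HBaseStorage already receives chararray values and the extra FOREACH cast is unnecessary.

A = LOAD '/data/input' USING PigStorage(',')
        AS (device_id:chararray, cost:chararray, hours:chararray,
            start_time:chararray, code:chararray);
B = FOREACH A GENERATE
        CONCAT(device_id, '|', cost, '|', start_time) AS rowkey,
        device_id, cost, hours, start_time, code;
-- The first field of each tuple becomes the HBase rowkey; the remaining five
-- map positionally to the columns listed in the HBaseStorage constructor.
STORE B INTO 'hbase://pig_table'
    USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
        'cf:device_id,cf:cost,cf:hours,cf:start_time,cf:code');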