I want to load JSON or CSV into HBase without using any MapReduce program and without HiveQL/Pig support. Is that possible, and which is more efficient, Hive-on-HBase or MapReduce-on-HBase?
Answer 0 (score: 1)
I did this with a Perl script.
Here is my (Perl-generated) JSON file:
{"OMAN20140131":{"c3":"c","c4":"d","c5":"tim","c2":"b","c6":"andrew","c1":"a"},"CURRENTLY20140131":{"c2":"tim2","c1":"bill2"},"THERE20140131":{"c3":"c","c4":"d","c9":"bill2","c10":"tim2","c2":"b","c6":"andrew","c7":"bill","c5":"tim","c1":"a","c8":"tom"},"TODAY20140131":{"c2":"bill","c1":"tom"}}
I shard on the STRING, with multiple columns depending on who/what references the key object.
use strict;
use warnings;
use JSON::XS qw(decode_json);
use File::Slurp qw(read_file);

my %words = ();

# Slurp the raw JSON file and decode it into the global hash.
sub ReadHash {
    my ($filename) = @_;
    my $json = read_file( $filename, { binmode => ':raw' } );
    %words = %{ decode_json $json };
}

# Main starts here
ReadHash("Save.json");

# Emit one HBase shell `put` command per row key.
foreach my $key ( keys %words ) {
    printf( "put 'test', '%s',", $key );
    my $cnt = 0;
    foreach my $key2 ( keys %{ $words{$key} } ) {
        my $val = $words{$key}{$key2};
        print "," if $cnt > 0;
        printf( "'cf:%s', '%s'", $key2, $val );
        ++$cnt;
    }
    print "\n";
}
Generate the HBase shell commands, then execute them.
Alternatively, I would look at happybase (Python), which can also load large data sets very quickly.
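A rough happybase sketch of the same load (assumptions on my part: happybase is installed, an HBase Thrift server is running on localhost, and the file/table/family names `Save.json`, `test`, and `cf` match the Perl script; the actual connection calls are left commented so the helper can be run standalone):

```python
import json


def rows_from_json(path):
    """Yield (row_key, {b'cf:col': b'value'}) pairs from the Save.json layout."""
    with open(path) as fh:
        data = json.load(fh)
    for row_key, columns in data.items():
        yield row_key.encode(), {
            f"cf:{col}".encode(): val.encode() for col, val in columns.items()
        }


# Uncomment to actually load (requires `pip install happybase` and a
# running HBase Thrift server):
# import happybase
# conn = happybase.Connection("localhost")
# with conn.table("test").batch(batch_size=1000) as b:
#     for key, cols in rows_from_json("Save.json"):
#         b.put(key, cols)
```

Batching the puts avoids one Thrift round trip per row, which matters for large data sets.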
Hope this helps.
This should produce output like:
put 'test', 'WHERE20140131','cf:c2', 'bill2','cf:c1', 'tim2'
put 'test', 'OMAN20140131','cf:c3', 'c','cf:c4', 'd','cf:c5', 'tim','cf:c2', 'b','cf:c1', 'a','cf:c6', 'andrew'
put 'test', 'CURRENTLY20140131','cf:c2', 'tim2','cf:c1', 'bill2'
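For comparison, the same command generator can be sketched in Python using only the standard library (a minimal sketch under the same assumptions as the Perl script: the `Save.json` layout shown above and a `test` table with a `cf` column family):

```python
import json


def hbase_puts(path, table="test", family="cf"):
    """Read the Save.json layout and yield one HBase shell `put` per row key."""
    with open(path) as fh:
        data = json.load(fh)
    for row_key, columns in data.items():
        cells = ",".join(f"'{family}:{col}', '{val}'" for col, val in columns.items())
        yield f"put '{table}', '{row_key}',{cells}"


if __name__ == "__main__":
    for command in hbase_puts("Save.json"):
        print(command)
```

The printed commands can then be piped into the HBase shell just like the Perl-generated ones.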
Answer 1 (score: 0)
Perhaps you can look into bulk loading. The link is here: bulk loading