HBase: how to add a column family, together with its data, from a source HBase table to a target HBase table

Time: 2018-06-11 14:47:19

Tags: hbase

Hi, I have a huge HBase table (source) with 5 column families. I want to add one column family from the source, together with its data, to a target HBase table that already has 5 column families of its own.

Example: create 'source', '1', '2', '3', '4', '5' (assume every column family contains data)

create 'target', '10', '20', '30', '40', '50' (assume every column family contains data)

Expected output:

The target table should contain the one column family from the source, together with its data.

describe 'target'

{NAME => '10'}{NAME => '20'}{NAME => '30'}{NAME => '40'}{NAME => '50'}{NAME => '5'}

1 Answer:

Answer 0: (score: 0)

You can use CopyTable for this purpose. It can copy a comma-separated list of column families; see the "families" option in the usage text below, taken from CopyTable's printUsage method.

private static void printUsage(final String errorMsg) {
  if (errorMsg != null && errorMsg.length() > 0) {
    System.err.println("ERROR: " + errorMsg);
  }
  System.err.println("Usage: CopyTable [general options] [--starttime=X] [--endtime=Y] " +
      "[--new.name=NEW] [--peer.adr=ADR] <tablename>");
  System.err.println();
  System.err.println("Options:");
  System.err.println(" rs.class     hbase.regionserver.class of the peer cluster");
  System.err.println("              specify if different from current cluster");
  System.err.println(" rs.impl      hbase.regionserver.impl of the peer cluster");
  System.err.println(" startrow     the start row");
  System.err.println(" stoprow      the stop row");
  System.err.println(" starttime    beginning of the time range (unixtime in millis)");
  System.err.println("              without endtime means from starttime to forever");
  System.err.println(" endtime      end of the time range.  Ignored if no starttime specified.");
  System.err.println(" versions     number of cell versions to copy");
  System.err.println(" new.name     new table's name");
  System.err.println(" peer.adr     Address of the peer cluster given in the format");
  System.err.println("              hbase.zookeeper.quorum:hbase.zookeeper.client"
      + ".port:zookeeper.znode.parent");
  System.err.println(" families     comma-separated list of families to copy");
  System.err.println("              To copy from cf1 to cf2, give sourceCfName:destCfName. ");
  System.err.println("              To keep the same name, just give \"cfName\"");
  System.err.println(" all.cells    also copy delete markers and deleted cells");
  System.err.println(" bulkload     Write input into HFiles and bulk load to the destination "
      + "table");
  System.err.println();
  System.err.println("Args:");
  System.err.println(" tablename    Name of the table to copy");
  System.err.println();
  System.err.println("Examples:");
  System.err.println(" To copy 'TestTable' to a cluster that uses replication for a 1 hour window:");
  System.err.println(" $ hbase " +
      "org.apache.hadoop.hbase.mapreduce.CopyTable --starttime=1265875194289 --endtime=1265878794289 " +
      "--peer.adr=server1,server2,server3:2181:/hbase --families=myOldCf:myNewCf,cf2,cf3 TestTable ");
  System.err.println("For performance consider the following general option:\n"
      + "  It is recommended that you set the following to >=100. A higher value uses more memory but\n"
      + "  decreases the round trip time to the server and may increase performance.\n"
      + "    -Dhbase.client.scanner.caching=100\n"
      + "  The following should always be set to false, to prevent writing data twice, which may produce \n"
      + "  inaccurate results.\n"
      + "    -Dmapreduce.map.speculative=false");
}

Source: https://github.com/apache/hbase/blob/master/hbase-mapreduce/src/main/java/org/apache/hadoop/hbase/mapreduce/CopyTable.java