I have been trying to write my own coprocessor that creates a secondary index using a prePut hook. To start, I have just been trying to get a prePut coprocessor to work at all. So far I can have the coprocessor add to the put object that is passed into it. What I have found is that I cannot get the coprocessor to write to a row separate from the one the incoming put object writes to. Obviously, to create a secondary index, I need to figure this out.
Here is the code for my coprocessor, but it does not work.
Yes, all the tables exist, and the column family 'colfam1' exists too.
HBase version: HBase 0.92.1-cdh4.1.2 from Cloudera CDH4.
Does anyone know what the problem is?
@Override
public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e, final Put put, final WALEdit edit, final boolean writeToWAL) throws IOException {
    // Try to attach a cell for a *different* row ("COPROCESSORROW") to the incoming put
    KeyValue kv = new KeyValue(Bytes.toBytes("COPROCESSORROW"), Bytes.toBytes("colfam1"), Bytes.toBytes("COPROCESSOR: " + System.currentTimeMillis()), Bytes.toBytes("IT WORKED"));
    put.add(kv);
}
I get the following error:
ERROR: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: IOException: 1 time, servers with issues:
Update:
I have modified my coprocessor to the following, but I still get errors. The postPut now writes (the secondary index), but there is still a timeout error, and the region hosting the table crashes, requiring me to restart the region. Sometimes the region restart does not work and everything (all tables) is corrupted, and the servers need to be rebuilt.
I have no idea why...!?
@Override
public void start(CoprocessorEnvironment env) throws IOException {
    LOG.info("(start)");
    pool = new HTablePool(env.getConfiguration(), 10);
}

@Override
public void postPut(final ObserverContext<RegionCoprocessorEnvironment> observerContext, final Put put, final WALEdit edit, final boolean writeToWAL) throws IOException {
    byte[] tableName = observerContext.getEnvironment().getRegion().getRegionInfo().getTableName();
    // Not necessary though if you register the coprocessor for the specific table, SOURCE_TBL
    if (!Bytes.equals(tableName, Bytes.toBytes(SOURCE_TABLE)))
        return;
    try {
        LOG.info("STARTING postPut");
        HTableInterface table = pool.getTable(Bytes.toBytes(INDEX_TABLE));
        LOG.info("TURN OFF AUTOFLUSH");
        table.setAutoFlush(false);
        // create row
        LOG.info("Creating new row");
        byte[] rowkey = Bytes.toBytes("COPROCESSOR ROW");
        Put indexput = new Put(rowkey);
        indexput.add(Bytes.toBytes("data"), Bytes.toBytes("CP: " + System.currentTimeMillis()), Bytes.toBytes("IT WORKED!"));
        LOG.info("Writing to table");
        table.put(indexput);
        LOG.info("flushing commits");
        table.flushCommits();
        LOG.info("close table");
        table.close();
    } catch (IllegalArgumentException ex) {
        // handle exception.
    }
}

@Override
public void stop(CoprocessorEnvironment env) throws IOException {
    LOG.info("(stop)");
    pool.close();
}
Here are the region server logs (note my logging statements):
2013-01-30 19:30:39,754 INFO my.package.MyCoprocessor: STARTING postPut
2013-01-30 19:30:39,754 INFO my.package.MyCoprocessor: TURN OFF AUTOFLUSH
2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: Creating new row
2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: Writing to table
2013-01-30 19:30:39,755 INFO my.package.MyCoprocessor: flushing commits
2013-01-30 19:31:39,813 WARN org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation: Failed all from region=test_table,,1359573731255.d41b77b31fafa6502a8f09db9c56b9d8., hostname=node01, port=60020
java.util.concurrent.ExecutionException: java.net.SocketTimeoutException: Call to node01/<private_ip>:60020 failed on socket timeout exception: java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/<private_ip>:56390 remote=node01/<private_ip>:60020]
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1557)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:949)
at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.flushCommits(HTablePool.java:449)
at my.package.MyCoprocessor.postPut(MyCoprocessor.java:81)
at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postPut(RegionCoprocessorHost.java:682)
at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1901)
at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1742)
at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3102)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1345)
SOLVED: I was trying to write to the same table that the coprocessor was processing, from within the coprocessor. In short, when I wrote a cell, the CP wrote a cell, which caused the CP to fire again and write another cell, and so on. I stopped it with a row check before writing the CP row, to prevent this loop.
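For reference, here is a minimal sketch of that row check, assuming all of the coprocessor-written rows use the literal "COPROCESSOR ROW" key (the exact guard used was not posted):

    // Hypothetical guard at the top of postPut: skip puts that target the
    // coprocessor's own index row, so the hook does not trigger itself forever.
    if (Bytes.equals(put.getRow(), Bytes.toBytes("COPROCESSOR ROW"))) {
        return; // this put was written by the coprocessor itself
    }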
Answer 0 (score: 5)
Here is a code snippet showing how we create secondary indexes in HBase using coprocessors. Hope it helps you.
public class TestCoprocessor extends BaseRegionObserver {

    private HTablePool pool = null;

    private final static String INDEX_TABLE = "INDEX_TBL";
    private final static String SOURCE_TABLE = "SOURCE_TBL";

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        pool = new HTablePool(env.getConfiguration(), 10);
    }

    @Override
    public void postPut(
            final ObserverContext<RegionCoprocessorEnvironment> observerContext,
            final Put put,
            final WALEdit edit,
            final boolean writeToWAL)
            throws IOException {
        byte[] tableName = observerContext.getEnvironment()
                .getRegion().getRegionInfo().getTableName();
        // Not necessary though if you register the coprocessor
        // for the specific table, SOURCE_TBL
        if (!Bytes.equals(tableName, Bytes.toBytes(SOURCE_TABLE))) {
            return;
        }
        try {
            // get the values to index from the incoming put
            final List<KeyValue> filteredList = put.get(
                    Bytes.toBytes("colfam1"), Bytes.toBytes("qual"));
            filteredList.get(0); // get the column value
            HTableInterface table = pool.getTable(Bytes.toBytes(INDEX_TABLE));
            // create row key
            byte[] rowkey = mkRowKey(); // make the row key
            Put indexput = new Put(rowkey);
            indexput.add(
                    Bytes.toBytes("colfam1"),
                    Bytes.toBytes("qual"),
                    Bytes.toBytes("value.."));
            table.put(indexput);
            table.close();
        } catch (IllegalArgumentException ex) {
            // handle exception.
        }
    }

    @Override
    public void stop(CoprocessorEnvironment env) throws IOException {
        pool.close();
    }
}
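For context, here is a hedged sketch of how a client might read such an index back, assuming mkRowKey() encoded the indexed column value as the index row key (the answer leaves that helper unspecified):

    // Hypothetical client-side lookup: fetch the index row for a given value.
    Configuration conf = HBaseConfiguration.create();
    HTable index = new HTable(conf, "INDEX_TBL");
    Get get = new Get(Bytes.toBytes("value..")); // the indexed value as row key
    Result result = index.get(get);
    index.close();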
To register the above coprocessor on SOURCE_TBL, go to the hbase shell and follow these steps:
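(The shell commands themselves were not included in the answer; the usual registration sequence for HBase 0.92, with the jar path and priority shown here as placeholders, looks like this:)

    disable 'SOURCE_TBL'
    alter 'SOURCE_TBL', METHOD => 'table_att', 'coprocessor' => 'hdfs:///path/to/coprocessor.jar|TestCoprocessor|1001'
    enable 'SOURCE_TBL'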
Answer 1 (score: -2)
Secondary indexing is now built into HBase. Take a look at this blog entry. There is no need to use coprocessors in HBase.