I am using JanusGraph 0.2.0 with HBase as the storage backend and Lucene as the index backend. I am running into a problem when reindexing with MapReduceIndexManagement.
Below is the error trace:
Error=java.util.concurrent.ExecutionException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.janusgraph.hadoop.MapReduceIndexManagement$FailedJobFuture.get(MapReduceIndexManagement.java:298)
at org.janusgraph.hadoop.MapReduceIndexManagement$FailedJobFuture.get(MapReduceIndexManagement.java:268)
at com.inn.foresight.core.generic.utils.CustomDataSource.addMapreduceReIndex(CustomDataSource.java:188)
at com.inn.foresight.core.generic.utils.CustomDataSource.main(CustomDataSource.java:163)
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:354)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:159)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:61)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:211)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:799)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:193)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:89)
at org.apache.hadoop.hbase.client.MetaScanner.allTableRegions(MetaScanner.java:324)
at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:90)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
at org.janusgraph.hadoop.formats.hbase.HBaseBinaryInputFormat.getSplits(HBaseBinaryInputFormat.java:58)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at org.janusgraph.hadoop.scan.HadoopScanRunner.runJob(HadoopScanRunner.java:138)
at org.janusgraph.hadoop.MapReduceIndexManagement.updateIndex(MapReduceIndexManagement.java:187)
... 2 more
My JanusGraph properties are below:
storage.backend = hbase
storage.hostname = localhost
storage.port = 2181
storage.hbase.ext.hbase.zookeeper.property.clientPort = 2181
storage.hbase.ext.zookeeper.znode.parent = /hbase-unsecure
query.fast-property = true
storage.hbase.table = Master9
storage.read-time = 200000
cache.db-cache = true
cache.db-cache-clean-wait = 20
cache.db-cache-time = 180000
cache.db-cache-size = 0.5
index.search.backend=lucene
index.search.directory=/home/ist/jIndexFolder
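For context, JGraphUtil.openGraphInstance() in the code below is our own wrapper; I assume it does little more than load the properties above from a file, roughly like this (the file path is a placeholder, not my real path):

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

// Placeholder path: the properties listed above saved to a file.
// JanusGraphFactory.open reads it and connects to the HBase storage
// backend and the Lucene index backend.
JanusGraph graph = JanusGraphFactory.open("/home/ist/janusgraph-hbase.properties");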
Here is my code:
JGraphUtil jGraphUtil = new JGraphUtil();
JanusGraph jGraph = jGraphUtil.openGraphInstance();
JanusGraphManagement management;
JanusGraphTransaction newTransaction;
jGraphUtil.closeOtherInstances();
newTransaction = jGraph.newTransaction();
management = jGraph.openManagement();
JanusGraphIndex graphIndex = management.getGraphIndex("bySrcEName");
MapReduceIndexManagement mrim = new MapReduceIndexManagement(jGraph);
try {
    mrim.updateIndex(graphIndex, SchemaAction.REINDEX).get();
} catch (InterruptedException | ExecutionException | BackendException e) {
    System.out.println("Error=" + Utils.getStackTrace(e));
}
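For reference, this is the reindex flow I am trying to follow, pieced together from the JanusGraph index-management documentation. It is only a sketch: the ReindexSketch class name and the properties-file path are placeholders, and bySrcEName is the index name from my schema.

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.core.schema.SchemaAction;
import org.janusgraph.hadoop.MapReduceIndexManagement;

public class ReindexSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder properties file containing the settings listed above
        JanusGraph graph = JanusGraphFactory.open("/home/ist/janusgraph-hbase.properties");

        JanusGraphManagement mgmt = graph.openManagement();
        MapReduceIndexManagement mr = new MapReduceIndexManagement(graph);

        // Submit the Hadoop MapReduce reindex job and block until it finishes
        mr.updateIndex(mgmt.getGraphIndex("bySrcEName"), SchemaAction.REINDEX).get();

        // Commit the management transaction so the index action is recorded
        mgmt.commit();
        graph.close();
    }
}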
Please take a look and let me know how to correctly perform a reindex in JanusGraph. Any working example of reindexing with MapReduceIndexManagement would be appreciated.