I have an HBase table with two column families: 'i:*' for info and 'f:b' for file:blob. I store images in the blob column, and some of them are almost 12 MB.
I can load/insert the files from Java without any problem, but as soon as I try to retrieve them by scanning for the f:b values (the blobs), my scanner hangs until it times out, and the region servers in my cluster die one after another (it is a 20-node cluster). The only way to stop my scanner from somehow inflicting this quasi-viral damage on my hapless nodes is to drop the table entirely (or so it seems).
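For context, the insert path is just the standard 0.98 Put API. A trimmed-down sketch of what it looks like (not the exact production code; the file path and row key are placeholders):

HTable table = new HTable(HBaseConfiguration.create(), "RASTER");
byte[] imageBytes = Files.readAllBytes(Paths.get("c:\\temp\\image001.tif")); // placeholder source file
Put put = new Put(Bytes.toBytes("image001"));                                // placeholder row key
put.add(Bytes.toBytes("i"), Bytes.toBytes("name"), Bytes.toBytes("image001.tif"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("b"), imageBytes);                 // the ~12 MB blob cell
table.put(put);
table.close();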
I am using Cloudera EDH '0.98.6-cdh5.2.0'.
Unfortunately my client simply times out, so there is no useful exception on that side; everything I can get out of my node logs is below:
2014-10-27 21:47:36,106 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to archive class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp
java.io.FileNotFoundException: File hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp does not exist.
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:658)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716)
at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712)
at org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath.getChildren(HFileArchiver.java:628)
at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:346)
at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:347)
at org.apache.hadoop.hbase.backup.HFileArchiver.resolveAndArchive(HFileArchiver.java:284)
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:137)
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75)
at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333)
at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254)
at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101)
at org.apache.hadoop.hbase.Chore.run(Chore.java:87)
at java.lang.Thread.run(Thread.java:745)
2014-10-27 21:47:36,129 WARN org.apache.hadoop.hbase.backup.HFileArchiver: Failed to complete archive of: [class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits/0000000000000000029.temp]. Those files are still in the original location, and they may slow down reads.
2014-10-27 21:47:36,129 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
java.io.IOException: Received error when attempting to archive files ([class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/f, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/i, class org.apache.hadoop.hbase.backup.HFileArchiver$FileablePath, file:hdfs://nameservice1/hbase/data/default/RASTER/92ceb2d86662ad6d959f4cc384229e0f/recovered.edits]), cannot delete region directory.
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:148)
at org.apache.hadoop.hbase.backup.HFileArchiver.archiveRegion(HFileArchiver.java:75)
at org.apache.hadoop.hbase.master.CatalogJanitor.cleanParent(CatalogJanitor.java:333)
at org.apache.hadoop.hbase.master.CatalogJanitor.scan(CatalogJanitor.java:254)
at org.apache.hadoop.hbase.master.CatalogJanitor.chore(CatalogJanitor.java:101)
at org.apache.hadoop.hbase.Chore.run(Chore.java:87)
at java.lang.Thread.run(Thread.java:745)
2014-10-27 21:47:36,146 INFO org.apache.hadoop.hbase.master.SplitLogManager: Done splitting /hbase/splitWAL/WALs%2Finsight-staging-slave019.spadac.com%2C60020%2C1414446135179-splitting%2Finsight-staging-slave019.spadac.com%252C60020%252C1414446135179.1414446317771
Here is the code I use to scan the table:
try {
    if (hBaseConfig == null) {
        hBaseConfig = HBaseConfiguration.create();
        hBaseConfig.setInt("hbase.client.scanner.timeout.period", 1200000);
        hBaseConfig.set("hbase.client.keyvalue.maxsize", "0");
        hBaseConfig.set("hbase.master", PROPS.get().getProperty("hbase.master"));
        hBaseConfig.set("hbase.zookeeper.quorum", PROPS.get().getProperty("zks"));
        hBaseConfig.set("zks.port", "2181");
        table = new HTable(hBaseConfig, "RASTER");
    }
    Scan scan = new Scan();
    scan.addColumn("f".getBytes(), "b".getBytes());
    scan.addColumn("i".getBytes(), "name".getBytes());
    ResultScanner scanner = table.getScanner(scan);
    for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
        /* I NEVER EVEN GET HERE IF I SCAN FOR 'f:b' */
        CellScanner cs = rr.cellScanner();
        String name = "";
        byte[] fileBs = null;
        while (cs.advance()) {
            Cell current = cs.current();
            byte[] cloneValue = CellUtil.cloneValue(current);
            byte[] cloneFamily = CellUtil.cloneFamily(current);
            byte[] qualBytes = CellUtil.cloneQualifier(current);
            String fam = Bytes.toString(cloneFamily);
            String qual = Bytes.toString(qualBytes);
            if (fam.equals("i")) {
                if (qual.equals("name")) {
                    name = Bytes.toString(cloneValue);
                }
            } else if (fam.equals("f") && qual.equals("b")) {
                fileBs = cloneValue;
            }
        }
        OutputStream bof = new FileOutputStream("c:\\temp\\" + name);
        bof.write(fileBs);
        break;
    }
} catch (IOException ex) {
    // removed
}
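For reference, here is a variation of the same scan that limits how many of these large rows come back per RPC (a sketch only; setCaching and setBatch are standard Scan methods, and the values here are arbitrary):

Scan scan = new Scan();
scan.addColumn("f".getBytes(), "b".getBytes());
scan.addColumn("i".getBytes(), "name".getBytes());
scan.setCaching(1);   // fetch one row per RPC instead of the default caching size, so only one ~12 MB blob is buffered at a time
scan.setBatch(1);     // return at most one cell per Result
ResultScanner scanner = table.getScanner(scan);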
Thanks. Does anyone know why scanning large blobs might be taking down my cluster? I'm sure it's something silly, but I can't figure it out.
Answer (score: 0)
It looks like this was the problem:
hBaseConfig.set("hbase.client.keyvalue.maxsize", "0");
I changed it to "50" and it now works.
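For clarity, the only changed line in the client setup above is this one (a sketch; note that the HBase reference guide describes hbase.client.keyvalue.maxsize as a size limit in bytes with a default of 10485760, and a value of zero or less disables the client-side check, so you may want to size it explicitly for ~12 MB cells):

// was: hBaseConfig.set("hbase.client.keyvalue.maxsize", "0");
hBaseConfig.set("hbase.client.keyvalue.maxsize", "50");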