I am trying to load a tab-separated HDFS file (3.5 GB, about 45 million records) into HBase with the following command:
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,description:part_description part /user/sw/spark_search/part_description_data
A sample of the file:
45-573 Conn Circular Adapter F/M 11 POS ST 1 Port
CA3100E14S-4P-B-03 Conn Circular PIN 1 POS Crimp ST Wall Mount 1 Terminal 1 Port Automotive
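As a rough sketch (not the ImportTsv source), the -Dimporttsv.columns option maps each tab-separated field to either the row key or a column-family:qualifier pair, in order. The helper name below is hypothetical:

```python
# Hypothetical illustration of the HBASE_ROW_KEY / column mapping that
# -Dimporttsv.columns declares. Real ImportTsv does this in MapReduce.
COLUMNS = ["HBASE_ROW_KEY", "description:part_description"]

def parse_line(line, sep="\t"):
    """Split one TSV line and map fields to the declared columns."""
    fields = line.rstrip("\n").split(sep)
    if len(fields) != len(COLUMNS):
        # ImportTsv counts such rows as "bad lines" and skips them
        raise ValueError("field count does not match importtsv.columns")
    row = dict(zip(COLUMNS, fields))
    row_key = row.pop("HBASE_ROW_KEY")   # first field becomes the row key
    return row_key, row                  # remaining fields become cells

key, cells = parse_line("45-573\tConn Circular Adapter F/M 11 POS ST 1 Port")
```

Here the part number becomes the HBase row key and the rest of the line is stored in the description:part_description column.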
I can see a MapReduce job start and reach 5%, but then the region servers crash and the job times out, throwing:
19/06/26 14:56:31 INFO mapreduce.Job: map 0% reduce 0%
19/06/26 15:06:59 INFO mapreduce.Job: Task Id : attempt_1561551541629_0001_m_000010_0, Status : FAILED
AttemptID:attempt_1561551541629_0001_m_000010_0 Timed out after 600 secs
19/06/26 15:06:59 INFO mapreduce.Job: Task Id : attempt_1561551541629_0001_m_000004_0, Status : FAILED
AttemptID:attempt_1561551541629_0001_m_000004_0 Timed out after 600 secs
19/06/26 15:06:59 INFO mapreduce.Job: Task Id : attempt_1561551541629_0001_m_000003_0, Status : FAILED
AttemptID:attempt_1561551541629_0001_m_000003_0 Timed out after 600 secs
After restarting the servers I can see that some of the data was loaded. How can I track down the cause of the crash?
After checking the region server logs, the only error I can find is:
2019-06-27 15:43:05,361 ERROR org.apache.hadoop.hbase.ipc.RpcServer: Unexpected throwable object
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ResultOrException$Builder.buildPartial(ClientProtos.java:29885)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ResultOrException$Builder.build(ClientProtos.java:29877)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getResultOrException(RSRpcServices.java:328)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.getResultOrException(RSRpcServices.java:319)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:789)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:716)
at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2146)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
2019-06-27 15:43:08,120 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-cdh5.14.4--1, built on 06/12/2018 10:49 GMT
But I can see that I have plenty of free RAM.
Answer 0 (score: 2)
The problem is that your mappers run for longer than 600 seconds and are therefore timed out and killed. Set mapreduce.task.timeout to 0 to disable the timeout. Normally the timeout is not a problem, but in your case the job writes to HBase rather than through the usual MapReduce context.write(...), so the framework sees no progress and assumes the task is stuck.
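One way to apply this (a sketch, assuming you pass the property per job rather than editing mapred-site.xml) is to add the standard Hadoop property on the same command line:

```shell
# Disable the 600 s task timeout for this one job only.
# mapreduce.task.timeout=0 means "never time out" -- use with care,
# since genuinely hung tasks will then also never be killed.
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dmapreduce.task.timeout=0 \
  -Dimporttsv.columns=HBASE_ROW_KEY,description:part_description \
  part /user/sw/spark_search/part_description_data
```

Setting it per job keeps the cluster-wide default intact for everything else.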
Answer 1 (score: 1)
The problem was caused by the region server running out of heap; the default heap size set by Cloudera appears to be quite low. After increasing the heap to 4 GB the file loaded successfully.
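On a plain (non-Cloudera-Manager) install, one way to raise only the RegionServer heap is via hbase-env.sh; this is a config sketch, and the 4 GB value is taken from the answer above, so adjust it to your hosts:

```shell
# hbase-env.sh -- give the RegionServer JVM a 4 GB heap.
# -Xms = -Xmx avoids heap resizing pauses under load.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms4g -Xmx4g"
```

On a CDH cluster the same setting lives in Cloudera Manager under the RegionServer "Java Heap Size" configuration; restart the region servers for it to take effect.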