Can't use TotalOrderPartitioner with Hive: Can't read partitions file

Date: 2016-03-30 07:34:23

Tags: hadoop mapreduce hive hbase totalorderpartitioner

We are trying to generate HBase HFiles from Hive for bulk loading. Our main problem is that when using

org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

the custom partitions file is not found:

java.lang.IllegalArgumentException: Can't read partitions file

Further details:

The custom partitioner file is created from a Hive table:

CREATE EXTERNAL TABLE netezza.ais_lowres_mmsi_range_keys(hbase_key_range_start string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveNullValueSequenceFileOutputFormat'
LOCATION '/tmp/ais_lowres_mmsi_range_keys';

INSERT OVERWRITE TABLE netezza.ais_lowres_mmsi_range_keys SELECT r_start FROM tmp_rows ORDER BY r_start;



# The tmp_rows table holds the partition splits of our current HBase table

# Table content is copied to a file as  per: https://cwiki.apache.org/confluence/display/Hive/HBaseBulkLoad
hadoop fs -cp /tmp/ais_lowres_mmsi_range_keys/* /tmp/ais_lowres_mmsi_range_keys_list
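As a sanity check (a suggestion added here, not part of the original post), the copied partitions file can be inspected before launching the job; `hadoop fs -text` can decode a SequenceFile, so the split keys should print out in sorted order:

```
# Verify the partitions file exists and is a readable SequenceFile
hadoop fs -ls /tmp/ais_lowres_mmsi_range_keys_list
hadoop fs -text /tmp/ais_lowres_mmsi_range_keys_list | head
```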

# Hive and HBase jars are added
ADD JAR /usr/hdp/2.3.2.0-2950/hive/lib/hive-hbase-handler-1.2.1.2.3.2.0-2950.jar;
ADD JAR /usr/hdp/2.3.2.0-2950/hbase/lib/hbase-server-1.1.2.2.3.2.0-2950.jar;
ADD JAR /usr/hdp/2.3.2.0-2950/hbase/lib/hbase-common-1.1.2.2.3.2.0-2950.jar;
ADD JAR /usr/hdp/2.3.2.0-2950/hbase/lib/hbase-client-1.1.2.2.3.2.0-2950.jar;
ADD JAR /usr/hdp/2.3.2.0-2950/hbase/lib/hbase-protocol-1.1.2.2.3.2.0-2950.jar;
SET hive.aux.jars.path = /tmp/hive-hbase-handler-1.2.1.2.3.2.0-2950.jar,/tmp/hbase-server-1.1.2.2.3.2.0-2950.jar,/tmp/hbase-client-1.1.2.2.3.2.0-2950.jar,/tmp/hbase-common-1.1.2.2.3.2.0-2950.jar,/tmp/hbase-protocol-1.1.2.2.3.2.0-2950.jar;

SET hive.execution.engine=mr;
SET mapreduce.job.reduces=$((num_range+1)); # The number of reducers is set to the number of partition splits +1
SET hive.mapred.partitioner=org.apache.hadoop.mapred.lib.TotalOrderPartitioner;
SET total.order.partitioner.natural.order=false;
SET total.order.partitioner.path=/tmp/ais_lowres_mmsi_range_keys_list;        
SET hfile.compression=gz;

INSERT OVERWRITE TABLE tmp_table 
SELECT [cols] FROM ais_lowres_mmsi_distinct
CLUSTER BY hbase_key;

This produces the following error:

######################################################################
Starting Job = job_1458218583243_0631, Tracking URL = http://osl5303.cm.cluster:8088/proxy/application_1458218583243_0631/
Kill Command = /usr/hdp/2.3.2.0-2950/hadoop/bin/hadoop job  -kill job_1458218583243_0631
Hadoop job information for Stage-1: number of mappers: 19; number of reducers: 49
2016-03-30 08:19:39,534 Stage-1 map = 0%,  reduce = 0%
2016-03-30 08:19:55,084 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_1458218583243_0631 with errors
Error during job, obtaining debugging information...
Examining task ID: task_1458218583243_0631_m_000009 (and more) from job job_1458218583243_0631
Examining task ID: task_1458218583243_0631_m_000017 (and more) from job job_1458218583243_0631
Examining task ID: task_1458218583243_0631_m_000008 (and more) from job job_1458218583243_0631
Examining task ID: task_1458218583243_0631_m_000001 (and more) from job job_1458218583243_0631
Examining task ID: task_1458218583243_0631_m_000008 (and more) from job job_1458218583243_0631
Examining task ID: task_1458218583243_0631_m_000003 (and more) from job job_1458218583243_0631

Task with the most failures(4): 
-----
Task ID:
  task_1458218583243_0631_m_000012

URL:
  http://osl5303.cm.cluster:8088/taskdetails.jsp?jobid=job_1458218583243_0631&tipid=task_1458218583243_0631_m_000012
-----
Diagnostic Messages for this Task:
Error: java.lang.IllegalArgumentException: Can't read partitions file
    at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:116)
    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
    at org.apache.hadoop.mapred.MapTask$OldOutputCollector.<init>(MapTask.java:592)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.FileNotFoundException: File file:/grid/3/hadoop/yarn/local/usercache/ketot/appcache/application_1458218583243_0631/container_e22_1458218583243_0631_01_000086/_partition.lst does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1752)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1776)
    at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:301)
    at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:88)
    ... 10 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 19  Reduce: 49   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
#########################################################################################

It seems the specified custom path is being ignored. Furthermore, the path that TotalOrderPartitioner looks for is on the local file system and does not exist. Any suggestions?

1 Answer:

Answer 0 (score: 2)

Setting total.order.partitioner.path is wrong, and as a result TotalOrderPartitioner falls back to its default. This doesn't seem to be documented anywhere; I had to dig through the source code to figure it out!

The property has since been renamed; it is now:

mapreduce.totalorderpartitioner.path
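Assuming the rest of the question's script is unchanged, the fix would be to replace the two misnamed SET lines with the current property names (a sketch; `mapreduce.totalorderpartitioner.naturalorder` is the renamed form of `total.order.partitioner.natural.order` according to the deprecated-properties list linked below):

```
SET mapreduce.totalorderpartitioner.naturalorder=false;
SET mapreduce.totalorderpartitioner.path=/tmp/ais_lowres_mmsi_range_keys_list;
```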

For the full list of deprecated properties, see https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
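This also explains the `file:...​/_partition.lst does not exist` message in the stack trace. The following sketch (illustrative Python, not the actual Hadoop source) mimics the lookup logic: `TotalOrderPartitioner` reads the key `mapreduce.totalorderpartitioner.path`, and when that key is absent it falls back to its default, the relative path `_partition.lst`, which gets resolved against the task's local working directory:

```python
# Mimics TotalOrderPartitioner's partition-file lookup (simplified).
DEFAULT_PATH = "_partition.lst"  # TotalOrderPartitioner.DEFAULT_PATH
PARTITIONER_PATH = "mapreduce.totalorderpartitioner.path"  # key actually read

def get_partition_file(conf: dict) -> str:
    # Fall back to the default relative path when the expected key is absent.
    return conf.get(PARTITIONER_PATH, DEFAULT_PATH)

# The question sets the old, unrecognized key, so the lookup misses:
old_conf = {"total.order.partitioner.path": "/tmp/ais_lowres_mmsi_range_keys_list"}
print(get_partition_file(old_conf))   # -> _partition.lst

# With the renamed property, the custom path is picked up:
new_conf = {PARTITIONER_PATH: "/tmp/ais_lowres_mmsi_range_keys_list"}
print(get_partition_file(new_conf))   # -> /tmp/ais_lowres_mmsi_range_keys_list
```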