How many Hive dynamic partitions do I need?

Date: 2015-09-30 07:13:59

Tags: azure hadoop hive hdinsight

I am running a large job that consolidates roughly 55 streams of samples ("tags", one sample per record), taken at irregular times over just under two years, into 15-minute averages. The raw dataset holds about 1.1 billion records across 23k streams; these 55 streams account for about 33 million of those records. I compute a 15-minute index and group by it to get the averages, but I appear to have exceeded the maximum number of dynamic partitions for my Hive job, even though it is set to 20k. I could raise it further, but the job already takes a long time to fail (about 6 hours, though I got that down to 2 by reducing the number of streams considered), and I don't actually know how to work out how many partitions I really need.
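For reference, the 15-minute index mentioned above can be derived directly from the Unix timestamp. The sketch below is only an assumption about how a column like qqFr2013 might be computed, since the actual derivation behind sensor_part_subset is not shown in the question:

-- Hedged sketch: bucket each sample into a quarter-hour index (900 seconds).
-- The real qqFr2013 may use a different epoch (e.g. the start of 2013); the
-- table and column names are taken from the query further down.
SELECT  tag,
        floor(unixtime / 900) AS qqFr2013,
        avg(value)            AS value
FROM    sensor_part_subset
GROUP BY tag, floor(unixtime / 900);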

Here is the code:

SET hive.exec.dynamic.partition = true; 
SET hive.exec.dynamic.partition.mode = nonstrict; 
SET hive.exec.max.dynamic.partitions=50000;
SET hive.exec.max.dynamic.partitions.pernode=20000; 


DROP TABLE IF EXISTS sensor_part_qhr; 

 CREATE TABLE sensor_part_qhr (
    tag  STRING,
    tag0 STRING,
    tag1 STRING,
    tagn_1  STRING,
    tagn  STRING,

    timestamp  STRING,
    unixtime INT,
    qqFr2013 INT,

    quality  INT,
    count  INT,
    stdev  double,
    value    double
)  
PARTITIONED BY (bld STRING);

INSERT INTO TABLE sensor_part_qhr
PARTITION (bld) 
SELECT  tag,
        min(tag), 
        min(tag0), 
        min(tag1), 
        min(tagn_1), 
        min(tagn),

        min(timestamp),
        min(unixtime),  
        qqFr2013,

        min(quality),
        count(value),
        stddev_samp(value),
        avg(value)
FROM    sensor_part_subset     
WHERE   tag1='Energy'
GROUP BY tag,qqFr2013;

Here is the error message:

    Error during job, obtaining debugging information...
    Examining task ID: task_1442824943639_0044_m_000008 (and more) from job job_1442824943639_0044
    Examining task ID: task_1442824943639_0044_r_000000 (and more) from job job_1442824943639_0044

    Task with the most failures(4): 
    -----
    Task ID:
      task_1442824943639_0044_r_000000

    URL:
      http://headnodehost:9014/taskdetails.jsp?jobid=job_1442824943639_0044&tipid=task_1442824943639_0044_r_000000
    -----
    Diagnostic Messages for this Task:
    Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. Maximum was set to: 20000
        at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
    Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException:

    [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. 
    The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. 
    Maximum was set to: 20000

        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:747)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.startGroup(FileSinkOperator.java:829)
        at org.apache.hadoop.hive.ql.exec.Operator.defaultStartGroup(Operator.java:498)
        at org.apache.hadoop.hive.ql.exec.Operator.startGroup(Operator.java:521)
        at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:232)
        ... 7 more

    Container killed by the ApplicationMaster.
    Container killed on request. Exit code is 137
    Container exited with a non-zero exit code 137


    FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
    MapReduce Jobs Launched: 
    Job 0: Map: 520  Reduce: 140   Cumulative CPU: 7409.394 sec   HDFS Read: 0 HDFS Write: 393345977 SUCCESS
    Job 1: Map: 9  Reduce: 1   Cumulative CPU: 87.201 sec   HDFS Read: 393359417 HDFS Write: 0 FAIL
    Total MapReduce CPU Time Spent: 0 days 2 hours 4 minutes 56 seconds 595 msec

Can anyone offer some ideas on how to calculate how many dynamic partitions a job like this might need?

Or perhaps I should be doing this differently altogether? I am running Hive 0.13 on Azure HDInsight.
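For what it's worth, the number of dynamic partitions an INSERT creates equals the number of distinct values produced for the partition column, so it can be estimated up front with a simple count. A minimal sketch, assuming bld is a column of sensor_part_subset:

-- Estimate how many dynamic partitions the INSERT would create:
-- one partition per distinct bld value among the selected rows.
SELECT COUNT(DISTINCT bld)
FROM   sensor_part_subset
WHERE  tag1 = 'Energy';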

Update

  • Corrected some of the numbers above.
  • Reduced it to 3 streams over 211k records, and it finally succeeded.
  • Started experimenting: cut the per-node partition limit to 5k, then to 1k, and it still succeeded.

So I am no longer blocked, but I think I would need millions of partitions per node to process the whole dataset in one go (which is what I actually want to do).

1 Answer:

Answer 0 (score: 1)

When inserting into sensor_part_qhr, the dynamic partition column must be specified last among the columns in the SELECT statement.
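Put differently, Hive assigns SELECT expressions to target columns by position and takes the last expression as the value of the dynamic partition column bld. In the query above the last expression is avg(value), so every distinct average becomes its own partition, which is why the partition count explodes. A possible corrected shape is sketched below; it assumes bld is available in sensor_part_subset (that table's schema is not shown) and that the remaining expressions are meant to map positionally to the columns of sensor_part_qhr:

INSERT INTO TABLE sensor_part_qhr
PARTITION (bld)
SELECT  tag,
        min(tag0),
        min(tag1),
        min(tagn_1),
        min(tagn),
        min(timestamp),
        min(unixtime),
        qqFr2013,
        min(quality),
        count(value),
        stddev_samp(value),
        avg(value),
        bld                      -- dynamic partition column must come last
FROM    sensor_part_subset
WHERE   tag1 = 'Energy'
GROUP BY tag, qqFr2013, bld;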