TreeNodeException when running spark-submit

Asked: 2018-05-31 19:32:43

Tags: apache-spark apache-spark-sql

18/05/31 18:33:45 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange hashpartitioning(pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50, 200)
+- *HashAggregate(keys=[loc_st_zsp_ori_tran_prem_amt#1654, pol_sym_cd#5, loc_st_prem_amt#1526, pol_nbr#1, loc_st_abbr#50, pol_eff_dt#2, pol_modu_nbr#4, pol_expi_dt#3], functions=[], output=[pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50, pol_modu_nbr#4, pol_sym_cd#5, pol_expi_dt#3, loc_st_prem_amt#1526, loc_st_zsp_ori_tran_prem_amt#1654])
   +- *HashAggregate(keys=[loc_st_zsp_ori_tran_prem_amt#1654, pol_sym_cd#5, loc_st_prem_amt#1526, pol_nbr#1, loc_st_abbr#50, pol_eff_dt#2, pol_modu_nbr#4, pol_expi_dt#3], functions=[], output=[loc_st_zsp_ori_tran_prem_amt#1654, pol_sym_cd#5, loc_st_prem_amt#1526, pol_nbr#1, loc_st_abbr#50, pol_eff_dt#2, pol_modu_nbr#4, pol_expi_dt#3])
      +- *Project [pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50, pol_modu_nbr#4, pol_sym_cd#5, pol_expi_dt#3, loc_st_prem_amt#1526, loc_st_zsp_ori_tran_prem_amt#1654]
         +- SortMergeJoin [pol_nbr#1, pol_eff_dt#2], [pol_nbr#2890, pol_eff_dt#2891], LeftOuter
            :- *Sort [pol_nbr#1 ASC NULLS FIRST, pol_eff_dt#2 ASC NULLS FIRST], false, 0
            :  +- Exchange hashpartitioning(pol_nbr#1, pol_eff_dt#2, 200)
            :     +- *Project [pol_nbr#1, pol_eff_dt#2, pol_expi_dt#3, pol_modu_nbr#4, pol_sym_cd#5, loc_st_abbr#50, loc_st_prem_amt#1526, loc_st_zsp_ori_tran_prem_amt#1654]
            :        +- Window [sum(RI_filter_drvd_prem_amt#656) windowspecdefinition(pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50, ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS loc_st_zsp_ori_tran_prem_amt#1654], [pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50]
            :           +- *Project [pol_nbr#1, pol_eff_dt#2, pol_expi_dt#3, pol_modu_nbr#4, pol_sym_cd#5, loc_st_abbr#50, RI_filter_drvd_prem_amt#656, loc_st_prem_amt#1526]
            :              +- Window [sum(drvd_prem_amt#83) windowspecdefinition(pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50, ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS loc_st_prem_amt#1526], [pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50]
            :                 +- *Project [pol_nbr#1, pol_eff_dt#2, pol_expi_dt#3, pol_modu_nbr#4, pol_sym_cd#5, loc_st_abbr#50, drvd_prem_amt#83, CASE WHEN (Contains(acty_src_cd#60, RI) || Contains(acty_src_cd#60, R2)) THEN cast(drvd_prem_amt#83 as decimal(38,4)) ELSE 0.0000 END AS RI_filter_drvd_prem_amt#656]
            :                    +- SortMergeJoin [pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50], [pol_nbr#347, pol_eff_dt#350, loc_st_abbr#360], LeftOuter
            :                       :- *Sort [pol_nbr#1 ASC NULLS FIRST, pol_eff_dt#2 ASC NULLS FIRST, loc_st_abbr#50 ASC NULLS FIRST], false, 0
            :                       :  +- Exchange hashpartitioning(pol_nbr#1, pol_eff_dt#2, loc_st_abbr#50, 200)
            :                       :     +- HiveTableScan [pol_nbr#1, pol_eff_dt#2, pol_expi_dt#3, pol_modu_nbr#4, pol_sym_cd#5, loc_st_abbr#50, acty_src_cd#60, drvd_prem_amt#83], HiveTableRelation `zna_stg_expected_loss`.`stg_data_curated_expected_loss_gl_cmps_covg_lvl_1`, org.apache.hadoop.hive.ql.io.orc.OrcSerde, [pol_nbr#1, pol_eff_dt#2, pol_expi_dt#3, pol_modu_nbr#4, pol_sym_cd#5, pol_prcg_ofc_cd#6, pol_co_nbr_za#7, pol_acty_src_cd#8, pol_dac_cd#9, pty_st_abbr#10, pty_zip_cd#11, pol_orgl_incp_dt#12, pol_renl_pol_nbr#13, pol_renl_pol_eff_dt#14, pol_renl_modu_nbr#15, pol_can_dt#16, pol_new_renl_cd#17, pol_mp_cd#18, pol_sic_cd#19, pol_undg_pgm_cd#20, pol_retro_cd#21, pol_exc_cd#22, pol_prdr_nbr#23, pol_occr_lim_amt#24L, ... 89 more fields], [exclusion_flag#114], [isnotnull(exclusion_flag#114), (exclusion_flag#114 = N)]
            :                       +- *Sort [pol_nbr#347 ASC NULLS FIRST, pol_eff_dt#350 ASC NULLS FIRS
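
For reference, the WARN at the top of the trace can be addressed independently of the exception: raising spark.debug.maxToStringFields stops Spark from truncating the string representation of large plans in the logs. A minimal sketch, assuming Spark 2.x (where the key matches the warning text; the value 1000 is an arbitrary example, and the app name is a placeholder):

    import org.apache.spark.sql.SparkSession

    // Raise the limit on how many fields are rendered when a plan is
    // converted to a string, so long plans are not truncated in logs.
    val spark = SparkSession.builder()
      .appName("plan-logging-example")
      .config("spark.debug.maxToStringFields", "1000")
      .getOrCreate()

The same setting can be passed on the command line with --conf spark.debug.maxToStringFields=1000. Note that this only affects log verbosity; it does not resolve the TreeNodeException itself.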

0 Answers:

No answers yet