How to fix a NegativeArraySizeException in a Spark SQL query

Asked: 2019-03-25 13:37:11

Tags: apache-spark apache-spark-sql

I am trying to run a complex Spark SQL query and am getting a NegativeArraySizeException.

The INSERT query being run:

INSERT INTO
  TABLE vt_bm_lu_fact_new
SELECT
  COUNTRY.rpt_country_cd,
  -1 * FACT.fact_id_nr AS auto_c01,
  'PRG_NET_PREMIUM_AM_USD' AS FACT_CD_TX,
  FACT.fact_descr_tx,
  FACT.fact_sort_order_id_nr,
  PRODUCT.benchmarking_product_cd,
  PRODUCT.benchmarking_product_desc,
  Cast(FACT.conformity_id + 999000000 AS INT) AS auto_c07,
  FACT.conformity_desc,
  FACT.currency_conv_in,
  FACT.insurance_structure_grp,
  FACT.qualitative_in,
  FACT.quantitative_in,
  FACT.exposure_priority_tx,
  FACT.numerator_fact_id,
  FACT.denominator_fact_id,
  FACT.scale_factor,
  FACT.conformity_sort_order,
  'N',
  FACT.metric_format_cd,
  FACT.l2_field_display_in
FROM(
    SELECT
      *
    FROM
      vt_bm_lu_fact_new
    WHERE
      fact_id_nr = 8
      AND Upper(Trim(rpt_country_cd)) = Upper(Trim('US'))
  ) AS FACT,
  (
    SELECT
      DISTINCT rpt_country_cd
    FROM
      vt_bm_lu_fact_new
    WHERE
      NOT(
        Upper(Trim(rpt_country_cd)) IN(
          Upper(Trim('US')),
          Upper(Trim('BM')),
          Upper(Trim('GB'))
        )
      )
  ) AS COUNTRY,
  (
    SELECT
      DISTINCT benchmarking_product_cd,
      benchmarking_product_desc
    FROM
      vt_bm_lu_fact_new
    WHERE
      NOT(
        Upper(Trim(rpt_country_cd)) IN(Upper(Trim('US')), Upper(Trim('GB')))
      )
  ) AS PRODUCT;

Error stack trace:

java.lang.NegativeArraySizeException
at org.apache.spark.rdd.CartesianRDD.getPartitions(CartesianRDD.scala:60)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250)
at scala.Option.getOrElse(Option.scala:121)
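For context, the three comma-separated subqueries above are joined without any join condition, so Spark plans two chained Cartesian products. In Spark's `CartesianRDD.getPartitions` (the frame where the exception is thrown), the partition array is sized as `rdd1.partitions.length * rdd2.partitions.length` using 32-bit `int` arithmetic, so a large enough product wraps around to a negative number and the array allocation throws. A minimal sketch of that overflow, with hypothetical partition counts:

```java
public class CartesianOverflow {
    public static void main(String[] args) {
        // Hypothetical partition counts; the real values depend on the job
        // and on settings such as spark.sql.shuffle.partitions.
        int p1 = 50_000;
        int p2 = 50_000;

        // CartesianRDD.getPartitions allocates an array of size
        // p1 * p2 with 32-bit int multiplication. 50_000 * 50_000
        // = 2_500_000_000, which exceeds Integer.MAX_VALUE and wraps
        // to a negative value, triggering NegativeArraySizeException
        // when the partitions array is allocated.
        int total = p1 * p2;           // wraps to -1794967296
        System.out.println(total);     // prints -1794967296
    }
}
```

If this is the cause, reducing the partition counts of the cross-joined inputs before the join (for example with `coalesce`, or by lowering `spark.sql.shuffle.partitions`) is one possible mitigation; this is a sketch of the failure mode, not a confirmed diagnosis of the query above.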

0 Answers