What does "Correlated scalar subqueries must be Aggregated" mean?

Time: 2016-11-01 09:51:25

Tags: apache-spark apache-spark-sql pyspark-sql

I am using Spark 2.0.

I want to execute the following SQL query:

val sqlText = """
select
  f.ID as TID,
  f.BldgID as TBldgID,
  f.LeaseID as TLeaseID,
  f.Period as TPeriod,
  coalesce(
    (select
       f.ChargeAmt
     from
       Fact_CMCharges f
     where
       f.BldgID = Fact_CMCharges.BldgID
     limit 1),
     0) as TChargeAmt1,
  f.ChargeAmt as TChargeAmt2,
  l.EFFDATE as TBreakDate
from
  Fact_CMCharges f
join
  CMRECC l on l.BLDGID = f.BldgID and l.LEASID = f.LeaseID and l.INCCAT = f.IncomeCat and date_format(l.EFFDATE,'D')<>1 and f.Period=EFFDateInt(l.EFFDATE) 
where
  f.ActualProjected = 'Lease'
except(
  select * from TT1 t2 left semi join Fact_CMCharges f2 on t2.TID=f2.ID) 
"""
val query = spark.sql(sqlText)
query.show()

The inner statement in the coalesce seems to produce the following error:

pyspark.sql.utils.AnalysisException: u'Correlated scalar subqueries must be Aggregated: GlobalLimit 1\n+- LocalLimit 1\n

What is wrong with the query?

1 Answer:

Answer 0 (score: 5)

You have to make sure that your subquery returns a single row by definition, not merely by the data it happens to contain. Otherwise the Spark Analyzer complains while resolving the SQL statement.

So this exception is thrown whenever Catalyst cannot be 100% sure, just by looking at the SQL statement (and without looking at your data), that the subquery returns only a single row.
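
For illustration, here is a minimal sketch of the pattern the analyzer rejects versus the one it accepts. The tables t and s and their columns id and amt are made up for this example:

// Rejected: limit 1 caps the row count at runtime, but it does not prove
// to the analyzer that exactly one row comes back for every outer row.
spark.sql("""
  select t.id,
         (select s.amt from s where s.id = t.id limit 1) as amt
  from t
""").show()
// AnalysisException: Correlated scalar subqueries must be Aggregated

// Accepted: an aggregate function returns exactly one row by definition.
spark.sql("""
  select t.id,
         (select max(s.amt) from s where s.id = t.id) as amt
  from t
""").show()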

If you are sure that your subquery yields only a single row, you can wrap its result in one of the following standard aggregation functions, and the Spark Analyzer will be happy:

  • first
  • avg
  • max
  • min
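
Applied to the query from the question, that means replacing limit 1 in the correlated subquery with one of these aggregates. Below is a sketch of the rewritten coalesce expression, using max (first would work just as well); the inner alias is also renamed to i so it no longer shadows the outer alias f:

coalesce(
  (select
     max(i.ChargeAmt)
   from
     Fact_CMCharges i
   where
     i.BldgID = f.BldgID),
  0) as TChargeAmt1,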