I am running into an issue when executing a relatively large query with Spark (cluster mode) on a Cloudera cluster.
Here is part of the query:
...
CASE WHEN (gender_code = 'M') THEN 1 ELSE 0 END `2114`,
CASE WHEN (gender_code IS NOT NULL AND LENGTH(TRIM(gender_code)) > 0) THEN
1 ELSE 0 END `1780`,
CASE WHEN (( gender_code = 'F'
) AND ( procedure_code between '54000' and '55920' )
) THEN 1 ELSE 0 END `4054`,
CASE WHEN (NVL(gender_code, 'U') = 'U') THEN 1 ELSE 0 END `92501`,
CASE WHEN ((getConstant("FILE_TYPE_CODE") = 'PC' AND gender_code in ('1', 'M')) OR (getConstant("FILE_TYPE_CODE") IN ('ME', 'MC', 'PC') AND gender_code = 'M')) THEN 1 ELSE 0 END `2125`,
CASE WHEN (date_of_birth is NULL) THEN 1 ELSE 0 END `92971`,
/* THIS ONE IS CAUSING ISSUE */
( select first(number_of_member_first_name)
  from ( select count(distinct x.member_first_name) as number_of_member_first_name,
                date_format(x.paid_date,'yyyyMM') as ym
         from dataset x
         where cast( datediff(x.date_of_service_from,x.date_of_birth)/365 as INTEGER ) > 60
         group by date_format(x.paid_date,'yyyyMM') ) s
  where s.ym = date_format(a.paid_date,'yyyyMM') ) `93251`,
CASE WHEN (date_of_birth is not null AND LENGTH(TRIM(date_of_birth)) > 0) THEN 1 ELSE 0 END `92504`,
CASE WHEN (member_city IS NOT NULL AND LENGTH(TRIM(member_city)) > 0) THEN 1 ELSE 0 END `1638`,
CASE WHEN (member_city is NULL) THEN 1 ELSE 0 END `92961`,
CASE WHEN (member_state is NULL) THEN 1 ELSE 0 END `92621`,
CASE WHEN (member_state = getConstant("CLIENT_CODE")
) THEN 1 ELSE 0 END `2260`,
CASE WHEN (member_state IS NOT NULL AND LENGTH(TRIM(member_state)) > 0) THEN 1 ELSE 0 END `1961`,
CASE WHEN (member_zip_code IS NOT NULL AND LENGTH(TRIM(member_zip_code)) > 0) THEN 1 ELSE 0 END `1793`,
CASE WHEN (member_zip_code is NULL) THEN 1 ELSE 0 END `92622`,
CASE WHEN (( date_of_service_from > paid_date ) AND ( date_of_service_from is NOT NULL )
...
This huge query has many scalar subqueries in its SELECT clause. When I test the code on my local machine, the part marked with "/* THIS ONE IS CAUSING ISSUE */" runs fine (see the linked screenshot). But when the same query is run against the same file on the Cloudera cluster, it fails with the following error:
java.lang.RuntimeException: Unexpected operator in scalar subquery: LocalRelation <empty>, [first(number_of_member_first_name, false)#405275L, ym#404801]
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.catalyst.optimizer.RewriteCorrelatedScalarSubquery$.evalPlan$1(subquery.scala:373)
Can anyone help me figure out why it runs fine on my local machine but throws this error on the Cloudera cluster?
Answer 0 (score: 0)
After careful debugging, it appears that the view backing my dataset had been dropped, so the subquery had no data to feed to the outermost query. The aggregate in the scalar subquery therefore ran over an empty relation (the `LocalRelation <empty>` in the stack trace), and the optimizer raised the error when rewriting the correlated scalar subquery.
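Independently of restoring the view, the failing correlated scalar subquery can be avoided altogether by pre-aggregating per month and joining on the month key. This is only a sketch of that idea, not the original author's fix; it reuses the table and column names from the query above and assumes the outer table is aliased `a`:

```sql
-- Hypothetical rewrite: compute the per-month distinct count once,
-- then LEFT JOIN on the month key instead of using a correlated
-- scalar subquery in the SELECT list.
SELECT a.*,
       s.number_of_member_first_name AS `93251`
FROM dataset a
LEFT JOIN ( SELECT date_format(x.paid_date, 'yyyyMM') AS ym,
                   count(DISTINCT x.member_first_name) AS number_of_member_first_name
            FROM dataset x
            WHERE cast(datediff(x.date_of_service_from, x.date_of_birth) / 365 AS INTEGER) > 60
            GROUP BY date_format(x.paid_date, 'yyyyMM') ) s
  ON s.ym = date_format(a.paid_date, 'yyyyMM');
```

With a LEFT JOIN, months with no qualifying rows simply yield NULL for `93251` instead of forcing Catalyst's `RewriteCorrelatedScalarSubquery` rule to evaluate an empty plan.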