Spark join causes column ambiguity error

Asked: 2019-04-04 08:45:48

Tags: apache-spark apache-spark-sql datastax

I have the following DataFrames:

accumulated_results_df
 |-- company_id: string (nullable = true)
 |-- max_dd: string (nullable = true)
 |-- min_dd: string (nullable = true)
 |-- count: string (nullable = true)
 |-- mean: string (nullable = true)

computed_df
 |-- company_id: string (nullable = true)
 |-- min_dd: date (nullable = true)
 |-- max_dd: date (nullable = true)
 |-- mean: double (nullable = true)
 |-- count: long (nullable = false)

I am trying to join them with spark-sql as follows:

 val resultDf = accumulated_results_df.as("a")
   .join(
     computed_df.as("c"),
     ($"a.company_id" === $"c.company_id") && ($"c.min_dd" > $"a.max_dd"),
     "left"
   )

which fails with the error:

org.apache.spark.sql.AnalysisException: Reference 'company_id' is ambiguous, could be: a.company_id, c.company_id.;

What am I doing wrong here, and how can I fix it?

2 answers:

Answer 0 (score: 1)

You should reference the aliased DataFrames and their columns via the col function:

val resultDf = accumulated_results_df.as("a")
  .join(
    computed_df.as("c"),
    (col("a.company_id") === col("c.company_id")) && (col("c.min_dd") > col("a.max_dd")),
    "left"
  )
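For reference, a minimal self-contained sketch of this approach, assuming a local SparkSession and hypothetical sample rows (only `company_id`, `max_dd`, and `min_dd` are populated; the other columns from the schemas above are omitted for brevity):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().master("local[*]").appName("join-demo").getOrCreate()
import spark.implicits._

// Hypothetical sample data mirroring the two schemas in the question.
val accumulated_results_df = Seq(("c1", "2019-01-10")).toDF("company_id", "max_dd")
val computed_df = Seq(("c1", java.sql.Date.valueOf("2019-02-01"))).toDF("company_id", "min_dd")

// Aliasing each DataFrame and qualifying every column with col("alias.name")
// gives Spark an unambiguous reference for each of the duplicated column names.
val resultDf = accumulated_results_df.as("a")
  .join(
    computed_df.as("c"),
    (col("a.company_id") === col("c.company_id")) && (col("c.min_dd") > col("a.max_dd")),
    "left"
  )
resultDf.show()
```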

Answer 1 (score: 0)

I fixed the problem as follows:

val resultDf = accumulated_results_df.join(
    computed_df.withColumnRenamed("company_id", "right_company_id").as("c"),
    accumulated_results_df("company_id") === $"c.right_company_id" && ($"c.min_dd" > accumulated_results_df("max_dd")),
    "left"
  )
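One side effect of this rename-based fix is that the joined result carries both `company_id` and the helper column `right_company_id`. If the helper column is not needed downstream, it can be dropped after the join (a minimal sketch; `resultDf` is the DataFrame built above):

```scala
// Remove the renamed join key once the join condition no longer needs it.
val cleanedDf = resultDf.drop("right_company_id")
```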