Inner join on DataFrames not working with Spark 2.1

Date: 2018-02-09 05:45:07

Tags: apache-spark apache-spark-sql spark-dataframe apache-spark-2.0

My dataset:

1 - The emp DataFrame looks like this:
emp.show()

+---+-----+------+----------+-------------+
| ID| NAME|salary|department|         date|
+---+-----+------+----------+-------------+
|  1| sban| 100.0|        IT|   2018-01-10|
|  2|  abc| 200.0|        HR|   2018-01-05|
|  3| Jack| 100.0|      SALE|   2018-01-05|
|  4|  Ram| 100.0|        IT|2018-01-01-06|
|  5|Robin| 200.0|        IT|   2018-01-07|
|  6| John| 200.0|      SALE|   2018-01-08|
|  7| sban| 300.0|  Director|   2018-01-01|
+---+-----+------+----------+-------------+

2 - Then I group by name and take the maximum salary for each name; say the resulting DataFrame is grpEmpByName:

val grpByName = emp.select(col("name")).groupBy(col("name")).agg(max(col("salary")).alias("max_salary")) 
grpByName.select("*").show()
+-----+----------+
| name|max_salary|
+-----+----------+
| Jack|     100.0|
|Robin|     200.0|
|  Ram|     100.0|
| John|     200.0|
|  abc|     200.0|
| sban|     300.0|
+-----+----------+
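The intent of the groupBy/max step above can be sketched in plain Python (an illustration of the semantics only, not the Spark API), using the sample rows from the emp table:

```python
# Pure-Python sketch of groupBy("name").agg(max("salary")):
# keep the largest salary seen for each name.
emp = [
    ("sban", 100.0), ("abc", 200.0), ("Jack", 100.0),
    ("Ram", 100.0), ("Robin", 200.0), ("John", 200.0), ("sban", 300.0),
]

max_salary = {}
for name, salary in emp:
    # Replace the stored value whenever a larger salary appears.
    max_salary[name] = max(max_salary.get(name, float("-inf")), salary)

print(max_salary["sban"])  # 300.0 (the larger of sban's two rows)
```

This mirrors the shown output: six distinct names, with sban collapsed to its 300.0 maximum.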

3 - Then I try the join:

val joinedBySalarywithMaxSal = emp.join(grpEmpByName, col("emp.salary") === col("grpEmpByName.max_salary") , "inner")

which throws:

18/02/08 21:29:26 INFO CodeGenerator: Code generated in 13.667672 ms
Exception in thread "main" org.apache.spark.sql.AnalysisException: cannot resolve '`grpByName.max_salary`' given input columns: [NAME, department, date, ID, salary, max_salary, NAME];;
'Join Inner, (salary#2 = 'grpByName.max_salary)
:- Project [ID#0, NAME#1, salary#2, department#3, date#4]
:  +- MetastoreRelation default, emp
+- Aggregate [NAME#44], [NAME#44, max(salary#45) AS max_salary#25]
   +- Project [salary#45, NAME#44]
      +- Project [ID#43, NAME#44, salary#45, department#46, date#47]
         +- MetastoreRelation default, emp

I don't understand why it fails, since when I check the column it is there:

 grpByName.select(col("max_salary")).show() 

+----------+
|max_salary|
+----------+
|     100.0|
|     200.0|
|     100.0|
|     200.0|
|     200.0|
|     300.0|
+----------+

Thanks in advance.

2 answers:

Answer 0 (score: 2)

Dot notation is used to reference nested structures inside a table, not to reference the table itself.

Call the col method defined on each DataFrame instead, like this:

emp.join(grpEmpByName, emp.col("salary") === grpEmpByName.col("max_salary"), "inner")

You can see an example here.

Also, note that joins are inner by default, so you should be able to write just:

emp.join(grpEmpByName, emp.col("salary") === grpEmpByName.col("max_salary"))
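One caveat worth noting: since the corrected join condition compares only salary values, a row can match another employee's maximum. A pure-Python sketch of the inner-join semantics (an illustration only, using the question's sample data) makes this visible:

```python
# Pure-Python sketch of the inner join on salary == max_salary.
emp = [
    (1, "sban", 100.0), (2, "abc", 200.0), (3, "Jack", 100.0),
    (4, "Ram", 100.0), (5, "Robin", 200.0), (6, "John", 200.0),
    (7, "sban", 300.0),
]
grp = {"Jack": 100.0, "Robin": 200.0, "Ram": 100.0,
       "John": 200.0, "abc": 200.0, "sban": 300.0}

joined = [(eid, name, salary, gname, gmax)
          for (eid, name, salary) in emp
          for (gname, gmax) in grp.items()
          if salary == gmax]

# Cross matches occur: row (3, "Jack", 100.0) joins Ram's max_salary
# too, because the condition compares salaries only, not names.
```

If the goal is each employee's row at their own maximum salary, joining on both name and salary would be needed in addition.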

Answer 1 (score: 1)

I'm not sure, but hope this helps:

val joinedBySalarywithMaxSal = emp.join(grpEmpByName, emp.col("salary") === grpEmpByName.col("max_salary"), "inner")