Spark DataFrame - a method that takes a row as input and outputs a DataFrame

Posted: 2018-01-19 02:48:41

Tags: scala apache-spark spark-dataframe

I need to write a method that iterates over all the rows of DF2 and, based on certain conditions, produces a DataFrame.

Here are the inputs DF1 & DF2:

import spark.implicits._   // assumes a SparkSession named `spark`; needed for .toDF

val df1Columns = Seq("Eftv_Date", "S_Amt", "A_Amt", "Layer", "SubLayer")
val df2Columns = Seq("Eftv_Date", "S_Amt", "A_Amt")
var df1 = List(
      List("2016-10-31", "1000000", "1000", "0", "1"),
      List("2016-12-01", "100000", "950", "1", "1"),
      List("2017-01-01", "50000", "50", "2", "1"),
      List("2017-03-01", "50000", "100", "3", "1"),
      List("2017-03-30", "80000", "300", "4", "1")
    )
      .map(row => (row(0), row(1), row(2), row(3), row(4))).toDF(df1Columns: _*)

+----------+-------+-----+-----+--------+
| Eftv_Date|  S_Amt|A_Amt|Layer|SubLayer|
+----------+-------+-----+-----+--------+
|2016-10-31|1000000| 1000|    0|       1|
|2016-12-01| 100000|  950|    1|       1|
|2017-01-01|  50000|   50|    2|       1|
|2017-03-01|  50000|  100|    3|       1|
|2017-03-30|  80000|  300|    4|       1|
+----------+-------+-----+-----+--------+

val df2 = List(
  List("2017-02-01", "0", "400")
).map(row => (row(0), row(1), row(2))).toDF(df2Columns: _*)

+----------+-----+-----+
| Eftv_Date|S_Amt|A_Amt|
+----------+-----+-----+
|2017-02-01|    0|  400|
+----------+-----+-----+

Now I need to write a method that filters DF1 based on the Eftv_Date value of each row of DF2. For example, the first row of df2 has Eftv_Date = 2017-02-01, so df1 must be filtered to the records whose Eftv_Date is less than or equal to 2017-02-01. That produces the 3 records shown below.

Expected result:

+----------+-------+-----+-----+--------+
| Eftv_Date|  S_Amt|A_Amt|Layer|SubLayer|
+----------+-------+-----+-----+--------+
|2016-10-31|1000000| 1000|    0|       1|
|2016-12-01| 100000|  950|    1|       1|
|2017-01-01|  50000|   50|    2|       1|
+----------+-------+-----+-----+--------+
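
For a single cutoff date, the condition itself is easy to express against df1; a minimal sketch, with the date from df2 hard-coded purely for illustration:

import org.apache.spark.sql.functions.col

// Keep only the df1 rows whose Eftv_Date is on or before the cutoff.
// The "yyyy-MM-dd" strings compare correctly as plain lexicographic strings.
df1.where(col("Eftv_Date") <= "2017-02-01").show()

The hard part is doing this once per row of df2, which is what the rest of the question is about.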

I have written the method below and call it using the map function:

def transformRows(row: Row ) = {
  val dateEffective = row.getAs[String]("Eftv_Date")
  val df1LayerMet    =  df1.where(col("Eftv_Date").leq(dateEffective))
  df1 = df1LayerMet
  df1
} 

val x = df2.map(transformRows)

But when calling it I get this error:

Error:(154, 24) Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
val x = df2.map(transformRows)

Note: We could achieve this with a join, but I need to implement a custom Scala method to do it, because many transformations are involved. For simplicity I have only mentioned one condition.

1 answer:

Answer 0 (score: 2)

df2.map(transformRows) cannot work as written: the function returns a DataFrame, for which Spark has no Encoder, and a DataFrame cannot be referenced inside executor-side code such as map anyway. It seems what you need is a non-equi join:

df1.alias("a").join(
    df2.select("Eftv_Date").alias("b"), 
    df1("Eftv_Date") <= df2("Eftv_Date")          // non-equi join condition
).select("a.*").show
+----------+-------+-----+-----+--------+
| Eftv_Date|  S_Amt|A_Amt|Layer|SubLayer|
+----------+-------+-----+-----+--------+
|2016-10-31|1000000| 1000|    0|       1|
|2016-12-01| 100000|  950|    1|       1|
|2017-01-01|  50000|   50|    2|       1|
+----------+-------+-----+-----+--------+
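
If a per-row custom Scala method really is needed, as the question's note suggests, one alternative (a sketch, not part of the original answer, with transformRows reworked to take the running DataFrame explicitly) is to collect the small df2 to the driver and fold its rows over df1 there. This sidesteps the Encoder error because no DataFrame is ever used inside executor-side code:

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.functions.col

// Driver-side variant: df2 is assumed to be small enough to collect.
// Each collected row narrows df1 further; additional per-row
// transformations can be added inside transformRows.
def transformRows(df: DataFrame, row: Row): DataFrame = {
  val dateEffective = row.getAs[String]("Eftv_Date")
  df.where(col("Eftv_Date") <= dateEffective)
}

val result = df2.collect().foldLeft(df1)(transformRows)
result.show()

For the sample data this yields the same three rows as the join above.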