How to dynamically rename columns in a Spark DataFrame based on a case class

Asked: 2020-06-22 18:45:29

Tags: scala apache-spark

I am trying to rename existing columns based on a case class (loaded from a JSON config file).

For example:

// Case class mapped to the JSON config
case class ObjMapping(colName: String,
                      renameCol: Option[String],
                      colOrder: Option[Integer])
// JSON config mapped to case class
val newOb = List(
  ObjMapping("DEPTID", Some("DEPT_ID"), Some(1)),
  ObjMapping("EMPID", Some("EMP_ID"), Some(4)),
  ObjMapping("DEPT_NAME", None, Some(2)),
  ObjMapping("EMPNAME", Some("EMP_NAME"), Some(3))
)

Sample source DataFrame:

// requires: import spark.implicits._ (for toDF)
val empDf = Seq(
  (1, 10, "IT", "John"),
  (2, 20, "DEV", "Ed"),
  (2, 30, "OPS", "Brian")
).toDF("DEPTID", "EMPID", "DEPT_NAME", "EMPNAME")

Based on the above config, I want to rename the DataFrame columns: EMPID becomes EMP_ID, EMPNAME becomes EMP_NAME, and DEPTID becomes DEPT_ID, while DEPT_NAME (whose renameCol is None) keeps its original name.

1 Answer:

Answer 0 (score: 2)

You can do it like this:

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._

// Sort the mappings by colOrder (None sorts before Some), then build
// col("X").as("Y") expressions, falling back to the original column
// name when renameCol is None.
val selectExpr: Seq[Column] = newOb
  .sortBy(_.colOrder)
  .map(om => col(om.colName).as(om.renameCol.getOrElse(om.colName)))

empDf
  .select(selectExpr:_*)
  .show()

which gives:

+-------+---------+--------+------+
|DEPT_ID|DEPT_NAME|EMP_NAME|EMP_ID|
+-------+---------+--------+------+
|      1|       IT|    John|    10|
|      2|      DEV|      Ed|    20|
|      2|      OPS|   Brian|    30|
+-------+---------+--------+------+
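
If you only need the renaming and not the column reordering, a minimal alternative sketch (using the standard `withColumnRenamed` method, folded over the mappings that actually define a new name) would be:

```scala
// Alternative: rename in place with withColumnRenamed via foldLeft.
// Note this only renames columns; it does not reorder them.
val renamedDf = newOb
  .filter(_.renameCol.isDefined)
  .foldLeft(empDf) { (df, om) =>
    df.withColumnRenamed(om.colName, om.renameCol.get)
  }
```

The `select`-based approach in the answer is preferable when both renaming and ordering matter, since it produces the final schema in a single pass.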