Duplicating rows in a Spark dataframe based on splitting column values in Scala

Date: 2020-01-14 19:49:53

Tags: scala apache-spark apache-spark-sql databricks

I have the following Scala code:

val fullCertificateSourceDf = certificateSourceDf
  .withColumn("Stage",
    when(col("Data.WorkBreakdownUp1Summary").isNotNull && col("Data.WorkBreakdownUp1Summary") =!= "",
      rtrim(regexp_extract($"Data.WorkBreakdownUp1Summary", "^.*?(?= - *[a-zA-Z])", 0))).otherwise(""))
  .withColumn("SubSystem",
    when(col("Data.ProcessBreakdownSummaryList").isNotNull && col("Data.ProcessBreakdownSummaryList") =!= "",
      regexp_extract($"Data.ProcessBreakdownSummaryList", "^.*?(?= - *[a-zA-Z])", 0)).otherwise(""))
  .withColumn("System",
    when(col("Data.ProcessBreakdownUp1SummaryList").isNotNull && col("Data.ProcessBreakdownUp1SummaryList") =!= "",
      regexp_extract($"Data.ProcessBreakdownUp1SummaryList", "^.*?(?= - *[a-zA-Z])", 0)).otherwise(""))
  .withColumn("Facility",
    when(col("Data.ProcessBreakdownUp2Summary").isNotNull && col("Data.ProcessBreakdownUp2Summary") =!= "",
      regexp_extract($"Data.ProcessBreakdownUp2Summary", "^.*?(?= - *[a-zA-Z])", 0)).otherwise(""))
  .withColumn("Area",
    when(col("Data.ProcessBreakdownUp3Summary").isNotNull && col("Data.ProcessBreakdownUp3Summary") =!= "",
      regexp_extract($"Data.ProcessBreakdownUp3Summary", "^.*?(?= - *[a-zA-Z])", 0)).otherwise(""))
  .select("Data.ID",
          "Data.CertificateID",
          "Data.CertificateTag",
          "Data.CertificateDescription",
          "Data.WorkBreakdownUp1Summary",
          "Data.ProcessBreakdownSummaryList",
          "Data.ProcessBreakdownUp1SummaryList",
          "Data.ProcessBreakdownUp2Summary",
          "Data.ProcessBreakdownUp3Summary",
          "Data.ActualStartDate",
          "Data.ActualEndDate",
          "Data.ApprovedDate",
          "Data.CurrentState",
          "DataType",
          "PullDate",
          "PullTime",
          "Stage",
          "System",
          "SubSystem",
          "Facility",
          "Area")
  .filter(col("Stage").isNotNull && length(col("Stage")) > 0)
  .filter((col("SubSystem").isNotNull && length(col("SubSystem")) > 0) ||
          (col("System").isNotNull && length(col("System")) > 0) ||
          (col("Facility").isNotNull && length(col("Facility")) > 0) ||
          (col("Area").isNotNull && length(col("Area")) > 0))
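As a quick sanity check, the lookahead pattern `^.*?(?= - *[a-zA-Z])` used in each `regexp_extract` call keeps everything before the first " - " that precedes a letter. Here is a minimal plain-Scala check of that same pattern (the sample value is illustrative, not from the real data):

```scala
object PatternCheck extends App {
  // Same lookahead pattern as in the regexp_extract calls above:
  // match non-greedily from the start, up to the first " - " followed by a letter.
  val pattern = "^.*?(?= - *[a-zA-Z])".r

  // Illustrative sample; real values come from the summary columns.
  val sample = "CS10-100-22-10 - Mine Intake Fan Heater System"

  val code = pattern.findFirstIn(sample).getOrElse("")
  println(code) // CS10-100-22-10
}
```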

This dataframe, fullCertificateSourceDf, contains the following data:

[screenshot: Original Data]

I have hidden some columns for brevity.

I would like the data to look like this:

[screenshot: Target Data]

We are splitting on two columns: ProcessBreakdownSummaryList and ProcessBreakdownUp1SummaryList. Both are comma-separated lists.

Note that when ProcessBreakdownSummaryList (CS10-100-22-10 - Mine Intake Fan Heater System, CS10-100-81-10 - Mine Services Switchgear) and ProcessBreakdownUp1SummaryList (CS10-100-22 - Service Shaft Ventilation, CS10-100-81 - Service Shaft Electrical) match, we should split only once.

However, if they differ, as with ProcessBreakdownSummaryList (CS10-100-22-10 - Mine Intake Fan Heater System, CS10-100-81-10 - Mine Services Switchgear) and ProcessBreakdownUp1SummaryList (CS10-100-22 - Service Shaft Ventilation, CS10-100-34 - Service Shaft Electrical), they should be split again, into a third row.
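One way to read the requirement above (a hedged sketch of the pairing rule, not the asker's confirmed logic; the object and method names are illustrative): pair each ProcessBreakdownSummaryList code with the ProcessBreakdownUp1SummaryList code that is its prefix, and emit any code left unmatched on either side as an extra row of its own.

```scala
// A sketch of the row-expansion rule in plain Scala (illustrative names).
object RowSplitSketch {
  // Keep the code portion of each comma-separated entry
  // (the text before the first space, e.g. "CS10-100-22-10").
  def codes(s: String): Seq[String] =
    s.split(",").map(_.trim.takeWhile(_ != ' ')).filter(_.nonEmpty).toSeq

  // Pair each SubSystem code with the System code that is its prefix;
  // codes unmatched on either side become extra rows with an empty partner.
  def expand(subSystemList: String, systemList: String): Seq[(String, String)] = {
    val subs = codes(subSystemList)
    val sys  = codes(systemList)
    val matched   = subs.flatMap(sub => sys.find(p => sub.startsWith(p)).map(p => (sub, p)))
    val extraSubs = subs.filterNot(sub => sys.exists(p => sub.startsWith(p))).map((_, ""))
    val extraSys  = sys.filterNot(p => subs.exists(sub => sub.startsWith(p))).map(("", _))
    matched ++ extraSubs ++ extraSys
  }
}
```

With the matching lists from the question this yields two rows; with the mismatched lists it yields three, because CS10-100-81-10 and CS10-100-34 each become their own row.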

Thanks in advance for your help.

1 Answer

Answer 0 (score: 0)

You can solve this in many ways; I think the simplest for complex processing like this is to use Scala. Read all the columns, including ProcessBreakdownSummaryList and ProcessBreakdownUp1SummaryList, compare their values to decide whether they are the same or different, and emit multiple output rows for a single input row. Then flatMap over the output to get a dataframe containing all the rows you need.


Here is an example of splitting one row into multiple:
val fullCertificateSourceDf = // your code

fullCertificateSourceDf.map { row =>
  // after the select above, nested columns are exposed by their leaf name
  val id = row.getAs[String]("ID")
  // ... read all the remaining columns the same way

  val processBreakdownSummaryList = row.getAs[String]("ProcessBreakdownSummaryList")
  val processBreakdownUp1SummaryList = row.getAs[String]("ProcessBreakdownUp1SummaryList")

  // split processBreakdownSummaryList on ","
  // split processBreakdownUp1SummaryList on ","
  // compare them for equality
  // let's say you end up with 4 rows

  // return a List of tuples of strings, e.g.
  // List((id, certificateId, certificateTag, ...distinct values of processBreakdownUp1SummaryList...), (...), ...)
  // all columns (id, certificateId, certificateTag, etc.) are repeated for each
  // distinct value of processBreakdownUp1SummaryList and processBreakdownSummaryList

}.flatMap(identity(_)).toDF("column1", "column2", ...)

A working example of the pattern looks like this:

    import spark.implicits._ // provides the Encoders needed by map/flatMap

    val employees = spark.createDataFrame(Seq(("E1", 100.0, "a,b"), ("E2", 200.0, "e,f"), ("E3", 300.0, "c,d"))).toDF("employee", "salary", "clubs")

    employees.map { r =>
      val clubs = r.getAs[String]("clubs").split(",")
      for {
        c <- clubs
      } yield (r.getAs[String]("employee"), r.getAs[Double]("salary"), c)
    }.flatMap(identity(_)).toDF("employee", "salary", "clubs").show(false)
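The same map-then-flatten pattern can be tried without a Spark session by running it over plain Scala collections (a sketch of the logic only; on a Spark Dataset the map/flatMap additionally require an implicit Encoder):

```scala
object FlatMapDemo extends App {
  // One (employee, salary, clubs) tuple per input row.
  val employees = Seq(("E1", 100.0, "a,b"), ("E2", 200.0, "e,f"), ("E3", 300.0, "c,d"))

  // Expand each row into one row per club, then flatten the result.
  val expanded = employees.flatMap { case (name, salary, clubs) =>
    clubs.split(",").map(club => (name, salary, club)).toSeq
  }

  expanded.foreach(println) // 6 rows: (E1,100.0,a), (E1,100.0,b), ...
}
```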