Scala UDF - combine column values in a specific format

Asked: 2018-09-05 15:24:32

Tags: scala apache-spark dataframe user-defined-functions

DF1 is what I currently have, and I want to transform DF1 into DF2.

Desired output:

 DF1                                                           DF2
+---------+----------------------------------------+          +---------+-------------------------------------------------------------------+
|   ID    |         Category                       |          |   ID    |                  category_name                                    |
+---------+----------------------------------------+          +---------+-------------------------------------------------------------------+  
|  31898  |   CP Bill Payment                      |          |  31898  |  CP Bill Payment + CP e-Transfer + CP IMT (CPS Limit + CPS Payee) |  
|  31898  |   CP e-Transfer + CP IMT               |          |  32614  |  CP Bill Payment + CP e-Transfer + CP Other Transfer (CPS Blocked)|
|  31898  |   CPS Limit + CPS Payee                |          |  35431  |  CP Bill Payment + CP e-Transfer                                  |
|  32614  |   CP e-Transfer + CP Other Transfer    |          |  33987  |  CP IMT (CPS Limit)                                               |
|  32614  |   CP Bill Payment                      |  =====>  |  35672  |  CPS Blocked                                                      |
|  32614  |   CPS Blocked                          |  =====>  |  37612  |  CPS Blocked + CPS Stop/Cancel/Reverse                            |
|  35431  |   CP e-Transfer                        |          +---------+-------------------------------------------------------------------+
|  35431  |   CP Bill Payment                      |
|  33987  |   CP IMT                               |
|  33987  |   CPS Limit                            |
|  35672  |   CPS Blocked                          |
|  37612  |   CPS Blocked + CPS Stop/Cancel/Reverse|
+---------+----------------------------------------+

The logic: for an ID like 31898, 32614, or 33987, where the group contains both CP* and CPS* values, the result should be CP* (CPS*), that is, the CP* entries joined with " + " followed by the CPS* entries in parentheses. For 35431, where the group has no CPS* values, simply join all entries with " + ". For 35672 and 37612, where only CPS* values are present, output just those entries. In every case the categories should be sorted in ascending order.

My current code works, but there are probably too many combinations to handle this way. How can I do the same thing with a UDF? Or is there a built-in function that can do this? Thanks in advance.
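For reference, the grouping rules above can be expressed as a plain Scala function. This is only a sketch with a hypothetical name (`combineCategories`), operating on the collected category strings for one ID; on the Spark side it could be wrapped in a UDF applied to `collect_list("Category")`.

```scala
// Sketch: combine one ID's category strings per the rules above.
// combineCategories is a hypothetical helper name, not from the question.
def combineCategories(categories: Seq[String]): String = {
  // Split already-concatenated entries into individual categories,
  // de-duplicate, and sort ascending (case-insensitive, to match DF2).
  val parts = categories
    .flatMap(_.split("\\+").map(_.trim))
    .filter(_.nonEmpty)
    .distinct
    .sortBy(_.toLowerCase)
  val (cps, cp) = parts.partition(_.startsWith("CPS"))
  val cpStr  = cp.mkString(" + ")
  val cpsStr = cps.mkString(" + ")
  if (cp.nonEmpty && cps.nonEmpty) s"$cpStr ($cpsStr)" // CP* and CPS* present
  else if (cp.nonEmpty) cpStr                          // only CP* entries
  else cpsStr                                          // only CPS* entries
}
```

For example, the 31898 group yields "CP Bill Payment + CP e-Transfer + CP IMT (CPS Limit + CPS Payee)", matching DF2.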

2 Answers:

Answer 0 (score: 1)

Here is an example of how to use a UDAF. You obviously don't need a UDAF just to concatenate column values by id, but it lets you add more logic. For example, to concatenate values grouped by the ID field, you could create a UDAF like:
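The buffer handling in the UDAF's update/merge steps reduces to a simple fold over strings. As a plain-Scala illustration (no Spark needed; the helper name `concatBuffer` is mine):

```scala
// Same semantics as the UDAF's update/merge: join non-empty strings with " + "
def concatBuffer(buffer: String, input: String): String =
  if (buffer.isEmpty) input else buffer + " + " + input

// Folding one group's values reproduces the per-ID aggregation result
val combined = Seq("CP Bill Payment", "CP e-Transfer + CP IMT")
  .foldLeft("")(concatBuffer)
// combined == "CP Bill Payment + CP e-Transfer + CP IMT"
```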

import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

class ConcatenateStrings extends UserDefinedAggregateFunction {
  override def inputSchema: StructType = StructType(StructField("input", StringType) :: Nil)

  override def bufferSchema: StructType = StructType(StructField("pair", StringType) :: Nil)

  override def dataType: DataType = StringType

  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = ""

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
      val b = buffer.getAs[String](0)
      val i = input.getAs[String](0)
      buffer(0) = if (b.isEmpty) i else b + " + " + i
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    val b1 = buffer1.getAs[String](0)
    val b2 = buffer2.getAs[String](0)
    if(!b1.isEmpty)
      buffer1(0) = (b1) ++ " + " ++ (b2)
    else
      buffer1(0) = b2
  }

  override def evaluate(buffer: Row): Any = {
    val yourString = buffer.getAs[String](0)
    // Compute your logic and return another String
    yourString + "@procesed"
  }
}

Then you can include it in an aggregation call:

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object testAppl0 {

  def main(args: Array[String]) : Unit = {

    val agg0 = new ConcatenateStrings()

    implicit val spark: SparkSession =
      SparkSession
        .builder()
        .appName("Test")
        .master("local[1]")
        .getOrCreate()

    import spark.implicits._

    val rows = Seq(Row(31898,"CP Bill Payment"), Row(31898,"CP e-Transfer + CP IMT"), Row(31898,"CPS Limit + CPS Payee "))

    val schema = List(
      StructField("ID", IntegerType, true),
      StructField("Category", StringType, true))

    val df =  spark.createDataFrame(
      spark.sparkContext.parallelize(rows),
      StructType(schema)
    )

    df.groupBy("ID").agg(agg0($"Category")).show(false)

  }
}

It returns a new column, "concatenatestrings(Category)":

+-----+--------------------------------------------------------------------------+
|ID   |concatenatestrings(Category)                                              |
+-----+--------------------------------------------------------------------------+
|31898|CP Bill Payment + CP e-Transfer + CP IMT + CPS Limit + CPS Payee @procesed|
+-----+--------------------------------------------------------------------------+

Check it out; maybe it helps.

Answer 1 (score: 1)

What I can come up with right now:

// UDF: keep only the entries matching the given prefix (e.g. "CP " or "CPS ") and join them
def mapColumn(col: String) = udf { (xs: Seq[String]) =>
  xs.filter(_.contains(col + " ")).mkString(" + ")
}

import org.apache.spark.sql.functions._

val df1 = df.groupBy("Id")
  .agg(
    mapColumn("CP")(sort_array(collect_set("Category"))).as("CategorySetCP"),
    mapColumn("CPS")(sort_array(collect_set("Category"))).as("CategorySetCPS"))
  .withColumn("CategorySetCPS_New", concat(lit(" ("), 'CategorySetCPS, lit(")")))
  .withColumn("category_name",
    when(length($"CategorySetCP") > 0 and length($"CategorySetCPS") > 0,
      concat($"CategorySetCP", $"CategorySetCPS_New"))
      .otherwise(when(length($"CategorySetCP") > 0 and length($"CategorySetCPS") === 0, $"CategorySetCP")
      .otherwise($"CategorySetCPS")))
  .select('Id, 'category_name)

df1.show(false)

Output:

+-----+-----------------------------------------------------------------+
|Id   |category_name                                                    |
+-----+-----------------------------------------------------------------+
|33987|CP IMT (CPS Limit)                                               |
|32614|CP Bill Payment + CP e-Transfer + CP Other Transfer (CPS Blocked)|
|35672|CPS Blocked                                                      |
|35431|CP Bill Payment + CP e-Transfer                                  |
|31898|CP Bill Payment + CP e-Transfer + CP IMT (CPS Limit + CPS Payee) |
|37612|CPS Blocked + CPS Stop/Cancel/Reverse                            |
+-----+-----------------------------------------------------------------+       
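The when/otherwise branching above is equivalent to this small pure function (hypothetical name `categoryName`, shown only to make the three cases explicit):

```scala
// Mirrors the when/otherwise chain: both parts present, CP only, or CPS only
def categoryName(cpPart: String, cpsPart: String): String =
  if (cpPart.nonEmpty && cpsPart.nonEmpty) s"$cpPart ($cpsPart)"
  else if (cpPart.nonEmpty) cpPart
  else cpsPart
```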

Hope this helps!