Technique for combining multiple column methods into a single function in Scala

Date: 2019-07-11 14:02:11

Tags: scala apache-spark

Below are two methods I came up with in Spark Scala. Each checks whether a column contains a given string (within a date window) and sums the occurrences (1 or 0). Is there a better way to write this as a single function, so we can avoid writing a new method every time a new condition is added? Thanks in advance.

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, lower, sum, when}
import spark.implicits._ // for the 'symbol column syntax (assumes a SparkSession named spark)

def sumFunctDays1cols(columnName: String, dayid: String, processday: String,
                      fieldString: String, newColName: String): Column = {
  sum(
    when(('visit_start_time > dayid)
      .and('visit_start_time <= processday)
      .and(lower(col(columnName)).contains(fieldString)), 1)
      .otherwise(0)
  ).alias(newColName)
}


def sumFunctDays2cols(columnName: String, dayid: String, processday: String,
                      fieldString1: String, fieldString2: String, newColName: String): Column = {
  sum(
    when(('visit_start_time > dayid)
      .and('visit_start_time <= processday)
      .and(lower(col(columnName)).contains(fieldString1)
        || lower(col(columnName)).contains(fieldString2)), 1)
      .otherwise(0)
  ).alias(newColName)
}

Below is where I call the functions.

sumFunctDays1cols("columnName", "2019-01-01", "2019-01-10", "mac", "cust_count")
sumFunctDays2cols("columnName", "2019-01-01", "2019-01-10", "mac", "lenovo", "prod_count")
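(These calls only build Column expressions, so presumably they are passed to an aggregation. A minimal sketch of that usage; the DataFrame df, the grouping key customer_id, and the searched column name page_name are assumptions, not from the question:)

// Hypothetical consumption of the helper columns; `df`, `customer_id`,
// and `page_name` are assumed names for illustration only.
val result = df.groupBy("customer_id").agg(
  sumFunctDays1cols("page_name", "2019-01-01", "2019-01-10", "mac", "cust_count"),
  sumFunctDays2cols("page_name", "2019-01-01", "2019-01-10", "mac", "lenovo", "prod_count")
)
result.show()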

2 Answers:

Answer 0 (score: 0)

You can do something like the following (not tested):

def sumFunctDays2cols(columnName: String, dayid: String, processday: String,
                      newColName: String, fields: Column*): Column = {
  sum(
    when(
      ('visit_start_time > dayid)
        .and('visit_start_time <= processday)
        .and(fields.map(lower(col(columnName)).contains(_)).reduce(_ || _)),
      1
    ).otherwise(0)
  ).alias(newColName)
}

You can use it as follows (note: the search terms must be literal values, not references to other columns):

import org.apache.spark.sql.functions.lit

sumFunctDays2cols(
  "columnName",
  "2019-01-01",
  "2019-01-10",
  "prod_count",
  lit("mac"), lit("lenovo")
)
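Since Column.contains also accepts plain values, a variation that takes the search terms as String* avoids wrapping each term in lit. A sketch along the same lines (untested, same assumptions as above):

import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, lower, sum, when}

// Same idea with plain-string varargs: build one "matches any term"
// predicate and count the rows in the date window that satisfy it.
def sumFunctDays(columnName: String, dayid: String, processday: String,
                 newColName: String, fields: String*): Column = {
  val matchesAny = fields.map(f => lower(col(columnName)).contains(f)).reduce(_ || _)
  sum(
    when((col("visit_start_time") > dayid)
      .and(col("visit_start_time") <= processday)
      .and(matchesAny), 1)
      .otherwise(0)
  ).alias(newColName)
}

// Usage:
// sumFunctDays("columnName", "2019-01-01", "2019-01-10", "cust_count", "mac")
// sumFunctDays("columnName", "2019-01-01", "2019-01-10", "prod_count", "mac", "lenovo")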

Hope this helps!

Answer 1 (score: 0)

Instead of String1, String2, ..., make the function's parameter a list of strings. I implemented a small example for you:

import org.apache.spark.sql.functions.udf
import spark.implicits._ // for toDF and $"..." (assumes a SparkSession named spark)

val df = Seq(
  (1, "mac"),
  (2, "lenovo"),
  (3, "hp"),
  (4, "dell")).toDF("id", "brand")

// Dictionary: the set of words to check against
val dict = Set("mac", "leno", "noname")

// UDF returning true if the value contains any word from the dictionary
val checkerUdf = udf { (s: String) => dict.exists(s.contains(_)) }

df.withColumn("brand_check", checkerUdf($"brand")).show()
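To get back to the count-style output from the question, the boolean check can be fed into the same sum(when(...)) pattern (a sketch; the date-window condition from the question would be added the same way once visit_start_time exists on the real DataFrame):

import org.apache.spark.sql.functions.{sum, when}

// Turn the per-row boolean into a single count, mirroring the question's
// sum(when(..., 1).otherwise(0)) pattern.
df.withColumn("brand_check", checkerUdf($"brand"))
  .agg(sum(when($"brand_check", 1).otherwise(0)).alias("brand_count"))
  .show()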

I hope this solves your problem. If you still need help, post the complete code snippet and I will assist.