Is there a way to filter or flag rows based on a Scala array?
Keep in mind that the real data has far more rows than this.
Sample data:
val clients = List(List("1", "67"), List("2", "77"), List("3", "56"), List("4", "90")).map(x => (x(0), x(1)))
val df = clients.toDF("soc", "ages")
+---+----+
|soc|ages|
+---+----+
| 1| 67|
| 2| 77|
| 3| 56|
| 4| 90|
| ..| ..|
+---+----+
I want to filter for (or flag) all of the ages that appear in a Scala array, along these lines (pseudocode, neither snippet compiles as written):

var z = Array(90, 56, 67)
df.where($"ages" IN z)

or

df.withColumn("flag", when($"ages" >= 30, 1)
  .otherwise(when($"ages" <= 5, 2)
    .otherwise(3)))
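For reference, a minimal working version of the filter half would look like this (a sketch assuming the usual spark-shell setup with import spark.implicits._ and import org.apache.spark.sql.functions._ in scope; the sample above builds ages as a string column, hence the cast):

// Keep only rows whose age appears in the Scala array;
// isin takes varargs, so the array is expanded with : _*
val z = Array(90, 56, 67)
val filtered = df.filter($"ages".cast("int").isin(z: _*))
filtered.show()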
Answer 0 (score: 3)
You can also pass each element of the array as an argument by expanding it with the _* operator, and then write a case when/otherwise using isin.
Ex:
val df1 = Seq((1, 67), (2, 77), (3, 56), (4, 90)).toDF("soc", "ages")
val z = Array(90, 56, 67)

df1.withColumn("flag",
    when('ages.isin(z: _*), "in Z array")
      .otherwise("not in Z array"))
  .show(false)
+---+----+--------------+
|soc|ages|flag |
+---+----+--------------+
|1 |67 |in Z array |
|2 |77 |not in Z array|
|3 |56 |in Z array |
|4 |90 |in Z array |
+---+----+--------------+
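If the goal is the multi-valued flag sketched in the question rather than a yes/no label, the same isin call can drive the first branch. A hedged sketch (the branch values 1/2/3 are borrowed from the question, with isin swapped in for the first condition):

// 1 = age is in z, 2 = age <= 5, 3 = everything else
df1.withColumn("flag",
    when($"ages".isin(z: _*), 1)
      .otherwise(when($"ages" <= 5, 2)
        .otherwise(3)))
  .show(false)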
Answer 1 (score: 2)
Another option is a UDF.
scala> val df1 = Seq((1, 67), (2, 77), (3, 56), (4, 90)).toDF("soc", "ages")
df1: org.apache.spark.sql.DataFrame = [soc: int, ages: int]
scala> df1.show
+---+----+
|soc|ages|
+---+----+
| 1| 67|
| 2| 77|
| 3| 56|
| 4| 90|
+---+----+
scala> val scalaAgesArray = Array(90, 56, 67)
scalaAgesArray: Array[Int] = Array(90, 56, 67)
scala> val containsAgeUdf = udf((x: Int) => scalaAgesArray.contains(x))
containsAgeUdf: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,BooleanType,Some(List(IntegerType)))
scala> val outputDF = df1.withColumn("flag", containsAgeUdf($"ages"))
outputDF: org.apache.spark.sql.DataFrame = [soc: int, ages: int ... 1 more field]
scala> outputDF.show(false)
+---+----+-----+
|soc|ages|flag |
+---+----+-----+
|1 |67 |true |
|2 |77 |false|
|3 |56 |true |
|4 |90 |true |
+---+----+-----+
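Since the question notes the real data is much larger, a common refinement of the UDF approach is to broadcast the lookup values as a Set, so the values are shipped to each executor once and membership checks are constant-time. A minimal sketch, assuming a SparkSession named spark:

import org.apache.spark.sql.functions.udf

// Broadcast an immutable Set once instead of serializing the Array with every task
val agesSet = spark.sparkContext.broadcast(scalaAgesArray.toSet)
val containsAgeSetUdf = udf((x: Int) => agesSet.value.contains(x))

df1.withColumn("flag", containsAgeSetUdf($"ages")).show(false)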