Get the first row based on a condition

Time: 2019-07-25 06:37:58

Tags: scala apache-spark

I have a dataframe, and for each network I want to keep only the first row whose indicator column is 0. For example, my dataframe looks like this:

network   volume  indicator  Hour
YYY       20      1          10
YYY       30      0          9
YYY       40      0          8
YYY       80      1          7

TTT       50      0          10
TTT       40      1          8
TTT       10      0          4
TTT       10      1          2

The result should look like this:

network   volume  indicator  Hour
YYY       20      1          10
YYY       30      0          9
YYY       80      1          7

TTT       50      0          10
TTT       40      1          8
TTT       10      1          2

So all the rows with indicator 1 are kept, and for each network only the first row with indicator 0 remains. While doing this I want everything sorted by Hour descending, so I keep the most recent indicator-0 row. How can I achieve this result?

1 Answer:

Answer 0 (score: 1)

Here is the code you need, with inline comments to help you follow it. (The output has been updated for the latest dataset, which has multiple 1s in the indicator column.)

sourceData.show()

+-------+------+---------+----+
|network|volume|indicator|Hour|
+-------+------+---------+----+
|    YYY|    20|        1|  10|
|    YYY|    30|        0|   9|
|    YYY|    40|        0|   8|
|    YYY|    80|        1|   7|
|    TTT|    50|        0|  10|
|    TTT|    40|        1|   8|
|    TTT|    10|        0|   4|
|    TTT|    10|        1|   2|
+-------+------+---------+----+


sourceData.printSchema()

root
 |-- network: string (nullable = true)
 |-- volume: integer (nullable = true)
 |-- indicator: integer (nullable = true)
 |-- Hour: integer (nullable = true)
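
For reference, here is a minimal sketch of how a sourceData dataframe like this could be built from the sample rows above; the SparkSession setup is an assumption for illustration, not part of the original answer (and nullability of the integer columns may differ slightly from the schema shown):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("firstIndicator0")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._  // needed later for the $"..." column syntax

// Sample rows copied from the question's dataset
val sourceData = Seq(
  ("YYY", 20, 1, 10),
  ("YYY", 30, 0, 9),
  ("YYY", 40, 0, 8),
  ("YYY", 80, 1, 7),
  ("TTT", 50, 0, 10),
  ("TTT", 40, 1, 8),
  ("TTT", 10, 0, 4),
  ("TTT", 10, 1, 2)
).toDF("network", "volume", "indicator", "Hour")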

The required transformation code:

// Splitting your dataset into two parts: indicator 1 and indicator 0
val indicator1Df = sourceData.filter("indicator == 1")
val indicator0Df = sourceData.filter("indicator == 0")

// Getting the first (most recent by Hour) row for each network where indicator = 0
indicator0Df.createOrReplaceTempView("indicator0")
val firstIndicator0df = spark.sql(
  """select network, volume, indicator, Hour
    |from (select i0.network, i0.volume, i0.indicator, i0.Hour,
    |             ROW_NUMBER() over (partition by i0.network order by i0.Hour desc) as rnk
    |      from indicator0 i0) i
    |where rnk = 1""".stripMargin)

// Merging both dataframes back together for the required output
val finalDf = indicator1Df.union(firstIndicator0df).orderBy($"network".desc, $"Hour".desc)

finalDf.show()

Final output:

+-------+------+---------+----+
|network|volume|indicator|Hour|
+-------+------+---------+----+
|    YYY|    20|        1|  10|
|    YYY|    30|        0|   9|
|    YYY|    80|        1|   7|
|    TTT|    50|        0|  10|
|    TTT|    40|        1|   8|
|    TTT|    10|        1|   2|
+-------+------+---------+----+
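
For comparison, the same result can be reached without registering a temp view by using the DataFrame window API directly. The following is an equivalent sketch (not from the original answer), assuming the same sourceData and the spark.implicits._ import:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.row_number

// Rank the indicator=0 rows within each network by Hour descending,
// keep only the top-ranked (most recent) row, then drop the helper column
val w = Window.partitionBy($"network").orderBy($"Hour".desc)

val firstIndicator0Alt = sourceData
  .filter($"indicator" === 0)
  .withColumn("rnk", row_number().over(w))
  .filter($"rnk" === 1)
  .drop("rnk")

// Union with all indicator=1 rows, sorted the same way as before
val finalDfAlt = sourceData
  .filter($"indicator" === 1)
  .union(firstIndicator0Alt)
  .orderBy($"network".desc, $"Hour".desc)

This keeps the whole pipeline in the DataFrame API, which avoids the string-embedded SQL and lets the compiler catch column-reference typos earlier.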