Duplicating record counts in Apache Spark

Asked: 2018-05-03 03:26:15

Tags: scala apache-spark pyspark apache-spark-sql spark-dataframe

This is an extension of the question Apache Spark group by combining types and sub types.

val sales = Seq(
  ("Warsaw", 2016, "facebook","share",100),
  ("Warsaw", 2017, "facebook","like",200),
  ("Boston", 2015,"twitter","share",50),
  ("Boston", 2016,"facebook","share",150),
  ("Toronto", 2017,"twitter","like",50)
).toDF("city", "year","media","action","amount")

The solution there works fine, but the expected output should count conditionally into different categories.

So the output should be as follows:

+-------+--------+-----+
|   city|   media|count|
+-------+--------+-----+
| Boston|facebook|    1|
| Boston| share1 |    2|
| Boston| share2 |    2|
| Boston| twitter|    1|
|Toronto| twitter|    1|
|Toronto| like   |    1|
| Warsaw|facebook|    2|
| Warsaw|share1  |    1|
| Warsaw|share2  |    1|
| Warsaw|like    |    1|
+-------+--------+-----+

If the action is share, I need it counted under both share1 and share2. If I were computing this programmatically, I would use a case statement: when the action is share, share1 = share1 + 1 and share2 = share2 + 1.

But how do I do this in Scala, PySpark, or SQL?
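To make the intended counting rule concrete, here is a minimal plain-Python sketch (no Spark involved) of the case-style logic described above, where a share action is counted under both share1 and share2 buckets:

```python
from collections import Counter

sales = [
    ("Warsaw", 2016, "facebook", "share", 100),
    ("Warsaw", 2017, "facebook", "like", 200),
    ("Boston", 2015, "twitter", "share", 50),
    ("Boston", 2016, "facebook", "share", 150),
    ("Toronto", 2017, "twitter", "like", 50),
]

counts = Counter()
for city, year, media, action, amount in sales:
    counts[(city, media)] += 1           # one count per (city, media)
    if action == "share":
        # a "share" action is counted under BOTH share1 and share2
        counts[(city, "share1")] += 1
        counts[(city, "share2")] += 1
    else:
        counts[(city, action)] += 1      # other actions counted as-is

# e.g. Boston had two share actions, so both buckets read 2:
# counts[("Boston", "share1")] == 2 and counts[("Boston", "share2")] == 2
```

This reproduces the expected table above; the Spark question is how to express the same duplication declaratively.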

1 Answer:

Answer 0 (score: 1)

A simple filter and union should give you the desired output:

import spark.implicits._
import org.apache.spark.sql.functions.lit

// counts per (city, media)
val media = sales.groupBy("city", "media").count()

// counts per (city, action), with the action column renamed to media
val action = sales.groupBy("city", "action").count()
  .select($"city", $"action".as("media"), $"count")

// the share rows, which will be duplicated as share1 and share2
val share = action.filter($"media" === "share")

media.union(action.filter($"media" =!= "share"))
  .union(share.withColumn("media", lit("share1")))
  .union(share.withColumn("media", lit("share2")))
  .show(false)

which should give you

+-------+--------+-----+
|city   |media   |count|
+-------+--------+-----+
|Boston |facebook|1    |
|Boston |twitter |1    |
|Toronto|twitter |1    |
|Warsaw |facebook|2    |
|Warsaw |like    |1    |
|Toronto|like    |1    |
|Boston |share1  |2    |
|Warsaw |share1  |1    |
|Boston |share2  |2    |
|Warsaw |share2  |1    |
+-------+--------+-----+