Concatenate values from one column, then build another column

Time: 2018-11-27 14:13:00

Tags: sql apache-spark apache-spark-sql databricks azure-databricks

I am using Spark SQL and performing some SQL operations on a Hive table. My table looks like this:

```

ID COST CODE
1  100  AB1
5  200  BC3
1  400  FD3
6  600  HJ2
1  900  432
3  800  DS2
2  500  JT4 

```

I want to create another table from it that will have the total cost and a chain of the top 5 CODEs in another column, like this:

```

ID  TOTAL_COST  CODE  CODE_CHAIN
1   1400        432   432, FD3, AB1

```

The total cost is easy, but how do I merge the values from the CODE column and form another column?

I have tried the collect_set function, but the values cannot be limited and may also not be ordered correctly because of the distributed processing.

Is there any SQL logic that can do this?

Edit:

I need the data sorted so that I get the top 5 values.
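To pin down the requirement, here is a plain-Scala sketch (no Spark) of the aggregation being asked for: per ID, total the costs and chain the codes of the top-5 rows by cost, highest first. The `Row` case class and the sample data are just a stand-in for the table above.

```scala
// Plain-Scala sketch (no Spark) of the desired aggregation: per id,
// total the costs and chain the codes of the top-5 rows by cost.
case class Row(id: Int, cost: Int, code: String)

val rows = Seq(
  Row(1, 100, "AB1"), Row(5, 200, "BC3"), Row(1, 400, "FD3"),
  Row(6, 600, "HJ2"), Row(1, 900, "432"), Row(3, 800, "DS2"),
  Row(2, 500, "JT4"))

val result: Map[Int, (Int, String)] =
  rows.groupBy(_.id).map { case (id, rs) =>
    val totalCost = rs.map(_.cost).sum
    val codeChain = rs.sortBy(r => -r.cost).take(5).map(_.code).mkString(", ")
    id -> (totalCost, codeChain)
  }

// result(1) == (1400, "432, FD3, AB1")
```

This is only a specification of the semantics; the answers below show how to express it in Spark.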

2 answers:

Answer 0 (score: 1):

Use `slice`, `sort_array` and `collect_list`:

```
import org.apache.spark.sql.functions._

df
  .groupBy("id")
  .agg(
    sum("cost") as "total_cost",
    slice(sort_array(collect_list(struct($"cost", $"code")), false), 1, 5)("code") as "codes")
```

In Spark 2.3 you'll have to replace `slice` with manual indexing of the sorted array.
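The Spark 2.3 fallback mentioned above, indexing the sorted array by hand instead of calling `slice`, can be illustrated without Spark. Here `sortedPairs.lift(i)` plays the role that `getItem(i)` (which yields null past the end of an array column) would play on a Spark column; the sample pairs are the id = 1 rows from the question.

```scala
// Illustration of the "manual indexing" fallback (plain Scala, no Spark):
// sort the collected (cost, code) pairs descending, then pick elements
// 0 through 4 one by one -- the effect slice(..., 1, 5) gives in Spark 2.4+.
val collected = Seq((100, "AB1"), (400, "FD3"), (900, "432"))

val sortedPairs = collected.sortBy { case (cost, _) => -cost }

// Indices past the end yield no element, mirroring getItem returning null.
val topCodes = (0 until 5).flatMap(i => sortedPairs.lift(i)).map(_._2)

// topCodes == Vector("432", "FD3", "AB1")
```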

Answer 1 (score: 1):

Use window functions plus a WITH clause (common table expression), and filter on the first row_number. Check this out:

```
scala> val df = Seq((1,100,"AB1"),(5,200,"BC3"),(1,400,"FD3"),(6,600,"HJ2"),(1,900,"432"),(3,800,"DS2"),(2,500,"JT4")).toDF("ID","COST","CODE")
df: org.apache.spark.sql.DataFrame = [ID: int, COST: int ... 1 more field]

scala> df.show()
+---+----+----+
| ID|COST|CODE|
+---+----+----+
|  1| 100| AB1|
|  5| 200| BC3|
|  1| 400| FD3|
|  6| 600| HJ2|
|  1| 900| 432|
|  3| 800| DS2|
|  2| 500| JT4|
+---+----+----+


scala> df.createOrReplaceTempView("course")

scala> spark.sql(""" with tab1 as (select id, cost, code, collect_list(code) over(partition by id order by cost desc rows between current row and 4 following) cc, row_number() over(partition by id order by cost desc) rc, sum(cost) over(partition by id) total from course) select id, total, cc from tab1 where rc=1 """).show(false)
+---+-----+---------------+
|id |total|cc             |
+---+-----+---------------+
|1  |1400 |[432, FD3, AB1]|
|6  |600  |[HJ2]          |
|3  |800  |[DS2]          |
|5  |200  |[BC3]          |
|2  |500  |[JT4]          |
+---+-----+---------------+


scala>
```
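Note that both answers produce the chain as an array (e.g. [432, FD3, AB1]), while the question's expected output shows a comma-separated string. In Spark SQL that would be one more step, e.g. wrapping the collected list in concat_ws(', ', cc); the plain-Scala equivalent of that final join:

```scala
// Joining the collected codes into the CODE_CHAIN string from the question.
// In Spark SQL the same step would be concat_ws(', ', cc).
val cc = Seq("432", "FD3", "AB1")
val codeChain = cc.mkString(", ")
// codeChain == "432, FD3, AB1"
```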