Suppose I have this Spark dataframe:
col1 | col2 | col3 | col4
a | g | h | p
r | i | h | l
f | j | z | d
a | j | m | l
f | g | h | q
f | z | z | a
...
I want to unpivot the columns and, for each one, get an array of the top n elements by number of occurrences. For example, with n = 3:
columnName | content
col1 | [f, a, r]
col2 | [g, j, i]
col3 | [h, z, m]
col4 | [l, a, d]
I managed to gather the column names into a single array column with the following code:
columnNames = output_df.columns
output_df = output_df.withColumn("columns", F.array([F.lit(x) for x in columnNames]))
I think I could use the explode function, but I'm not sure that would be the most efficient approach.
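Here is a rough, untested sketch of what I have in mind with explode (assuming Spark 2.4+ for transform/sort_array; unpivoted, w and result are just illustrative names):
import pyspark.sql.functions as F
from pyspark.sql.window import Window

n = 3

# Unpivot: one (columnName, value) row per original cell
unpivoted = df.select(
    F.explode(
        F.array(*[F.struct(F.lit(c).alias("columnName"), F.col(c).alias("value"))
                  for c in df.columns])
    ).alias("cell")
).select("cell.columnName", "cell.value")

# Count occurrences per (columnName, value), keep the top n per column,
# and collect them into an array ordered by decreasing frequency
w = Window.partitionBy("columnName").orderBy(F.desc("count"))
result = (unpivoted
          .groupBy("columnName", "value").count()
          .withColumn("rank", F.row_number().over(w))
          .filter(F.col("rank") <= n)
          .groupBy("columnName")
          .agg(F.expr("transform(sort_array(collect_list(struct(rank, value))), s -> s.value)")
               .alias("content")))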
Any suggestions?
Thanks
Answer 0 (score: 0)
I don't see any way other than manually counting all the occurrences, which is not very efficient; I'd be glad to hear about other approaches.
However, if performance is not a concern, this does the job!
Note that I'm writing it in Scala and will try to translate it to PySpark, but since I've never done that before, it may be rough.
// Let's create a dataframe for reproducibility (assuming a SparkSession named spark)
import org.apache.spark.sql.functions._
import spark.implicits._

val data = Seq(("a", "g", "h", "p"),
               ("r", "i", "h", "l"),
               ("f", "j", "z", "d"),
               ("a", "j", "m", "l"),
               ("f", "g", "h", "q"),
               ("f", "z", "z", "a"))
val df = data.toDF("col1", "col2", "col3", "col4")
// Let's add a constant 1; summing it in the groupBy will give us the occurrences!
val dfWithFuturOccurences = df.withColumn("futur_occurences", lit(1))
// Your n value
val n = 3
// Here goes the magic
df.columns // For each column
  .map(x =>
    (x, dfWithFuturOccurences
      .groupBy(x)
      .agg(sum("futur_occurences").alias("occurences")) // Count occurrences here
      .orderBy(desc("occurences"))
      .select(x)
      .limit(n) // Keep the top n elements
      .rdd.map(r => r(0).toString).collect().toSeq)) // Collect them and store them as a Seq of strings
  .toSeq
  .toDF("col", "top_elements")
In PySpark, it could look something like this:
import pyspark.sql.functions as F

# Same preparation as in the Scala version: add a constant 1 to sum per group
dfWithFuturOccurences = df.withColumn("futur_occurences", F.lit(1))
n = 3

data = list(map(lambda x:
                (x,
                 [str(row[x]) for row in
                  dfWithFuturOccurences
                  .groupBy(x)
                  .agg(F.sum("futur_occurences").alias("occurences"))
                  .orderBy(F.desc("occurences"))
                  .select(x)
                  .limit(n)
                  .collect()]),
                df.columns))
Then convert your data into a dataframe and you're done!
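For example, assuming a SparkSession named spark is available, that conversion could look like this (untested):
# Hypothetical conversion of the collected (column, top-n values) pairs into a dataframe
result = spark.createDataFrame(data, ["col", "top_elements"])
result.show()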
Output:
+----+------------+
| col|top_elements|
+----+------------+
|col1| [f, a, r]|
|col2| [g, j, z]|
|col3| [h, z, m]|
|col4| [l, p, d]|
+----+------------+