I'm trying to normalize the values of multiple columns in a Spark DataFrame by subtracting each column's mean and dividing by its stddev. This is the code I have so far:
from pyspark.sql import Row
from pyspark.sql.functions import stddev_pop, avg
df = spark.createDataFrame([Row(A=1, B=6), Row(A=2, B=7), Row(A=3, B=8),
                            Row(A=4, B=9), Row(A=5, B=10)])
exprs = [x - (avg(x)) / stddev_pop(x) for x in df.columns]
df.select(exprs).show()
This gives me the result:
+------------------------------+------------------------------+
|(A - (avg(A) / stddev_pop(A)))|(B - (avg(B) / stddev_pop(B)))|
+------------------------------+------------------------------+
| null| null|
+------------------------------+------------------------------+
Where I was hoping for:
+------------------------------+------------------------------+
|(A - (avg(A) / stddev_pop(A)))|(B - (avg(B) / stddev_pop(B)))|
+------------------------------+------------------------------+
| -1.414213562| -1.414213562|
| -0.707106781| -0.707106781|
| 0| 0|
| 0.707106781| 0.707106781|
| 1.414213562| 1.414213562|
+------------------------------+------------------------------+
I believe I could do this with the StandardScaler class from mllib, but I'd prefer to use only the DataFrame API if possible, purely as a learning exercise.
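As a sanity check on the hoped-for output above, the desired values are ordinary population z-scores, which can be verified in plain Python (no Spark needed) with the standard-library `statistics` module:

```python
import statistics

# Same data as the DataFrame in the question.
data = {"A": [1, 2, 3, 4, 5], "B": [6, 7, 8, 9, 10]}

zscores = {}
for col, values in data.items():
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)  # population stddev, matching stddev_pop
    zscores[col] = [(v - mean) / std for v in values]

print(zscores["A"])
# [-1.414..., -0.707..., 0.0, 0.707..., 1.414...] -- the hoped-for column
```

Both columns give identical z-scores here because B is just A shifted by 5, which leaves the mean-centered, stddev-scaled values unchanged.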
Answer 0 (score: 4)
Thanks to the answer here, I came up with this:
from pyspark.sql.functions import stddev_pop, avg, broadcast
cols = df.columns
stats = (df.groupBy().agg(
    *([stddev_pop(x).alias(x + '_stddev') for x in cols] +
      [avg(x).alias(x + '_avg') for x in cols])))
df = df.join(broadcast(stats))
exprs = [(df[x] - df[x + '_avg']) / df[x + '_stddev'] for x in cols]
df.select(exprs).show()
+------------------------+------------------------+
|((A - A_avg) / A_stddev)|((B - B_avg) / B_stddev)|
+------------------------+------------------------+
| -1.414213562373095| -1.414213562373095|
| -0.7071067811865475| -0.7071067811865475|
| 0.0| 0.0|
| 0.7071067811865475| 0.7071067811865475|
| 1.414213562373095| 1.414213562373095|
+------------------------+------------------------+
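For intuition about why the unconditioned `df.join(broadcast(stats))` works: `stats` has exactly one row, so joining with no condition is effectively a cross join that appends the aggregate columns to every data row. A plain-Python sketch of that pattern (hypothetical names, no Spark involved):

```python
import statistics

# Rows mirroring the question's DataFrame.
rows = [{"A": a, "B": b} for a, b in zip(range(1, 6), range(6, 11))]
cols = ["A", "B"]

# One "row" of per-column stats, like the aggregated stats DataFrame.
stats = {}
for c in cols:
    vals = [r[c] for r in rows]
    stats[c + "_avg"] = statistics.fmean(vals)
    stats[c + "_stddev"] = statistics.pstdev(vals)

# The "cross join": append the single stats row to every data row.
joined = [{**r, **stats} for r in rows]

# Apply the same expressions as the answer's select.
normalized = [
    {c: (r[c] - r[c + "_avg"]) / r[c + "_stddev"] for c in cols}
    for r in joined
]
print(normalized[0])
```

In Spark the `broadcast` hint just tells the planner to ship the tiny one-row `stats` to every executor rather than shuffle the data; note that on Spark 2.x a join with no condition may require `spark.sql.crossJoin.enabled` or an explicit `crossJoin` call.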