I have a dataframe object that looks like this:
+--+----+----+----+----+----+----+----+----+----+-----+
|id|bin1|bin2|bin3|bin4|bin5|bin6|bin7|bin8|bin9|bin10|
+--+----+----+----+----+----+----+----+----+----+-----+
|a | 1|null|null|null|null| 1| 14| 91| 929| null|
|c | 4| 2| 5| 82| 49| 176| 222| 439|null| null|
|f | 1| 1|null|null| 2| 8| 226| 294| 2| null|
|e |null| 1| 2|null| 4| 13| 19| 242| 752| 1|
|y | 1| 1| 3| 9| 11| 17| 136| 664| 338| null|
|e | 4| 2| 1| 8| 14| 169| 952| 431|null| null|
+--+----+----+----+----+----+----+----+----+----+-----+
How can I replace the absolute values with relative values (frequencies)?

Edit: after the transformation, the first row of the dataframe should look like this:
+--+----+----+----+----+----+----+-----+------+-----+-----+
|id|bin1|bin2|bin3|bin4|bin5|bin6|bin7 |bin8 |bin9 |bin10|
+--+----+----+----+----+----+----+-----+------+-----+-----+
|a | 0.0|null|null|null|null| 0.0| 0.01| 0.09| 0.90| null|
+--+----+----+----+----+----+----+-----+------+-----+-----+
The algorithm should divide each cell value by the sum of its row, so that after the transformation every row sums to 1. I think I could do this with a map, but I don't know how.
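The row-wise arithmetic being asked for can be sketched in plain Scala (no Spark), modeling nulls as `None` so they are skipped in the sum and preserved in the output, as the expected first row suggests. The `RowNormalize` object and its `normalize` helper are illustrative names, not part of any library:

```scala
// A minimal sketch of per-row normalization, assuming nulls (None)
// contribute 0 to the sum and stay None in the result.
object RowNormalize {
  def normalize(row: Seq[Option[Double]]): Seq[Option[Double]] = {
    val total = row.flatten.sum // flatten drops the Nones
    row.map(_.map(v => if (total == 0) 0.0 else v / total))
  }

  def main(args: Array[String]): Unit = {
    // The non-null bins of row "a" from the question.
    val rowA = Seq(Some(1.0), None, Some(1.0), Some(14.0), Some(91.0), Some(929.0))
    println(normalize(rowA))
  }
}
```

After normalization, the non-null entries of each row sum to 1; rounding to two decimals would give the 0.01 / 0.09 / 0.90 values shown above.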
Answer 0 (score: 1)
Assuming you want null to be treated as 0, here is a solution:
scala> var df = Seq((1d,2d,Double.NaN),(Double.NaN, 3d,4d), (5d, Double.NaN, 6d)).toDF("a", "b", "c")
df: org.apache.spark.sql.DataFrame = [a: double, b: double, c: double]
scala> df.show
+---+---+---+
| a| b| c|
+---+---+---+
|1.0|2.0|NaN|
|NaN|3.0|4.0|
|5.0|NaN|6.0|
+---+---+---+
scala> val cols = df.columns
cols: Array[String] = Array(a, b, c)
scala> import org.apache.spark.sql.functions.col
scala> df = df.na.fill(0d).withColumn("sum", cols.map(col).reduce(_ + _))
df: org.apache.spark.sql.DataFrame = [a: double, b: double, c: double, sum: double]
scala> df.show
+---+---+---+----+
| a| b| c| sum|
+---+---+---+----+
|1.0|2.0|0.0| 3.0|
|0.0|3.0|4.0| 7.0|
|5.0|0.0|6.0|11.0|
+---+---+---+----+
scala> cols.foreach( cName => df = df.withColumn(cName, df.col(cName) / df.col("sum")))
scala> df.drop("sum").show
+-------------------+-------------------+------------------+
| a| b| c|
+-------------------+-------------------+------------------+
| 0.3333333333333333| 0.6666666666666666| 0.0|
| 0.0|0.42857142857142855|0.5714285714285714|
|0.45454545454545453| 0.0|0.5454545454545454|
+-------------------+-------------------+------------------+