When using Bucketizer in PySpark, I am trying to get the split values back. The current result contains the bucket indices:
from pyspark.ml.feature import Bucketizer

data = [(0, -1.0), (1, 0.0), (2, 0.5), (3, 1.0), (4, 10.0),
        (5, 25.0), (6, 100.0), (7, 300.0), (8, float("nan"))]
df = spark.createDataFrame(data, ["id", "value"])
splits = [-float("inf"), 0, 0.001, 1, 5, 10, 20, 30, 40, 50,
          60, 70, 80, 90, 100, float("inf")]
result_bucketizer = Bucketizer(splits=splits, inputCol="value",
                               outputCol="result").setHandleInvalid("keep").transform(df)
result_bucketizer.show()
The result is:
+---+-----+------+
| id|value|result|
+---+-----+------+
| 0| -1.0| 0.0|
| 1| 0.0| 1.0|
| 2| 0.5| 2.0|
| 3| 1.0| 3.0|
| 4| 10.0| 5.0|
| 5| 25.0| 6.0|
| 6|100.0| 14.0|
| 7|300.0| 14.0|
| 8| NaN| 15.0|
+---+-----+------+
I want the result to be:
+---+-----+------+
| id|value|result|
+---+-----+------+
| 0| -1.0| -inf|
| 1| 0.0| 0.0|
| 2| 0.5| 0.001|
| 3| 1.0| 1.0|
| 4| 10.0| 10.0|
| 5| 25.0| 20.0|
| 6|100.0| 100.0|
| 7|300.0| 100.0|
| 8| NaN| NaN|
+---+-----+------+
Answer 0 (score: 1)
This is how I approached it.
First, I created the dataframe and the splits, along with a dictionary that maps each bucket index to its split value:
from pyspark.ml.feature import Bucketizer

data = [(0, -1.0), (1, 0.0), (2, 0.5), (3, 1.0), (4, 10.0),
        (5, 25.0), (6, 100.0), (7, 300.0), (8, float("nan"))]
df = spark.createDataFrame(data, ["id", "value"])
splits = [-float("inf"), 0, 0.001, 1, 5, 10, 20, 30, 40, 50,
          60, 70, 80, 90, 100, float("inf")]
# dictionary of {bucket index: split value} - bucket i is labeled
# with its lower bound, splits[i]
splits_dict = {i: splits[i] for i in range(len(splits))}
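For reference (this print is my addition, not part of the original answer), the dictionary ends up looking like this:
print(splits_dict)
# {0: -inf, 1: 0, 2: 0.001, 3: 1, 4: 5, 5: 10, 6: 20, 7: 30,
#  8: 40, 9: 50, 10: 60, 11: 70, 12: 80, 13: 90, 14: 100, 15: inf}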
Then I created the Bucketizer as a separate variable and used it to transform the dataframe:
# create the bucketizer
bucketizer = Bucketizer(splits=splits, inputCol="value", outputCol="result")
# bucketed dataframe; handleInvalid='skip' drops the NaN row
bucketed = bucketizer.setHandleInvalid('skip').transform(df)
Finally, to get the split-value labels, I applied the replace function with the dictionary defined above. This yields the desired table, except that the NaN row was dropped by 'skip':
bucketed = bucketed.replace(to_replace=splits_dict, subset=['result'])
bucketed.show()
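If the NaN row must be kept, as in the desired output, a variant along these lines should work. This is a sketch of mine, not part of the original answer; it assumes Spark 2.4+ (where element_at is available) and relies on handleInvalid="keep" routing invalid values to the extra bucket at index len(splits) - 1:

from pyspark.sql import functions as F

# keep invalid rows instead of dropping them; NaN goes to bucket 15
kept = bucketizer.setHandleInvalid("keep").transform(df)

# look up each bucket's lower bound in an array of split values;
# element_at is 1-based, hence the +1 on the bucket index
split_array = F.array(*[F.lit(float(s)) for s in splits])
labeled = kept.withColumn(
    "result",
    F.when(F.isnan("value"), float("nan"))  # invalid rows keep their NaN
     .otherwise(F.element_at(split_array, F.col("result").cast("int") + 1)),
)
labeled.show()

This avoids the replace-based lookup entirely and should reproduce the desired table, including the NaN row.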