For example, I want to classify the people in a DataFrame into the following 4 age groups.
age_bins = [0, 6, 18, 60, np.Inf]
age_labels = ['infant', 'minor', 'adult', 'senior']
In pandas I would use pandas.cut() to do this. How do I do the same in PySpark?
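For reference, this is roughly what I mean in pandas (a minimal sketch, assuming df is a pandas DataFrame with an age column; right=False makes the bins right-open, matching the answers below):
import numpy as np
import pandas as pd

age_bins = [0, 6, 18, 60, np.inf]
age_labels = ['infant', 'minor', 'adult', 'senior']

# [0, 6) -> infant, [6, 18) -> minor, [18, 60) -> adult, [60, inf) -> senior
df['age_group'] = pd.cut(df['age'], bins=age_bins, labels=age_labels, right=False)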
Answer 0 (score: 20)
You can use the Bucketizer transform from Spark's ML library:
from pyspark.ml.feature import Bucketizer

values = [("a", 23), ("b", 45), ("c", 10), ("d", 60), ("e", 56), ("f", 2), ("g", 25), ("h", 40), ("j", 33)]
df = spark.createDataFrame(values, ["name", "ages"])

# Buckets: [0, 6), [6, 18), [18, 60), [60, inf); "keep" sends invalid values (NaN) to an extra bucket instead of raising an error
bucketizer = Bucketizer(splits=[0, 6, 18, 60, float('inf')], inputCol="ages", outputCol="buckets")
df_buck = bucketizer.setHandleInvalid("keep").transform(df)
df_buck.show()
Output:
+----+----+-------+
|name|ages|buckets|
+----+----+-------+
| a| 23| 2.0|
| b| 45| 2.0|
| c| 10| 1.0|
| d| 60| 3.0|
| e| 56| 2.0|
| f| 2| 0.0|
| g| 25| 2.0|
| h| 40| 2.0|
| j| 33| 2.0|
+----+----+-------+
If you need a name for each bucket, you can add a new column that maps the bucket index to its label with a UDF:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# Map each bucket index to a human-readable label
t = {0.0: "infant", 1.0: "minor", 2.0: "adult", 3.0: "senior"}
udf_foo = udf(lambda x: t[x], StringType())
df_buck.withColumn("age_bucket", udf_foo("buckets")).show()
Output:
+----+----+-------+----------+
|name|ages|buckets|age_bucket|
+----+----+-------+----------+
| a| 23| 2.0| adult|
| b| 45| 2.0| adult|
| c| 10| 1.0| minor|
| d| 60| 3.0| senior|
| e| 56| 2.0| adult|
| f| 2| 0.0| infant|
| g| 25| 2.0| adult|
| h| 40| 2.0| adult|
| j| 33| 2.0| adult|
+----+----+-------+----------+
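As a side note, the same lookup can be done without a UDF using Spark's built-in when/otherwise column expressions; a minimal sketch against the df_buck DataFrame from above:
from pyspark.sql import functions as F

labels = (F.when(F.col("buckets") == 0.0, "infant")
          .when(F.col("buckets") == 1.0, "minor")
          .when(F.col("buckets") == 2.0, "adult")
          .otherwise("senior"))
# note: with handleInvalid("keep"), invalid rows land in an extra bucket
# and would fall through to "senior" here
df_buck.withColumn("age_bucket", labels).show()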
Answer 1 (score: 0)
You can also write a plain PySpark UDF:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def categorizer(age):
    if age < 6:
        return "infant"
    elif age < 18:
        return "minor"
    elif age < 60:
        return "adult"
    else:
        return "senior"
Then:
bucket_udf = udf(categorizer, StringType())
bucketed = df.withColumn("bucket", bucket_udf("age"))
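One caveat: unlike Bucketizer's handleInvalid option, these plain Python comparisons raise a TypeError on null ages, so if the age column can contain nulls you may want to start the function with an explicit check such as if age is None: return None.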
Answer 2 (score: 0)
In my case I had to bucket a string-valued column randomly, which required a few extra steps:
from pyspark.sql.types import LongType, IntegerType
import pyspark.sql.functions as F

buckets_number = 4  # number of buckets desired

# md5 gives a hex digest; keep the first 16 chars, turn the letters a-f into
# digits, and cast the resulting 16-digit string to a number
df_bucketed = df.withColumn("sub", F.substring(F.md5("my_col"), 1, 16)) \
    .withColumn("translate", F.translate("sub", "abcdefghijklmnopqrstuvwxyz", "01234567890123456789012345").cast(LongType())) \
    .select("my_col",
            # modulo yields a bucket index in 0 .. buckets_number - 1
            (F.col("translate") % buckets_number).cast(IntegerType()).alias("bucket_my_col"))