Is it possible to add a new column based on the maximum value of previous columns, where the previous columns are string literals? Consider the following dataframe:
df = spark.createDataFrame(
    [
        ('1', 25000, "black", "black", "white"),
        ('2', 16000, "red", "black", "white"),
    ],
    ['ID', 'cash', 'colour_body', 'colour_head', 'colour_foot']
)
The target frame should then look like this:
df = spark.createDataFrame(
    [
        ('1', 25000, "black", "black", "white", "black"),
        ('2', 16000, "red", "black", "white", "white"),
    ],
    ['ID', 'cash', 'colour_body', 'colour_head', 'colour_foot', 'max_v']
)
If there is no detectable maximum value, the last valid colour should be used.
Is there some kind of counting approach or a udf for this?
Answer 0 (score: 1)
Define a UDF around statistics.mode to compute the row-wise mode with the desired semantics:
import statistics

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

def mode(*x):
    try:
        return statistics.mode(x)
    except statistics.StatisticsError:
        # No unique most-common value, so fall back to the last column.
        return x[-1]

mode = udf(mode, StringType())
df.withColumn("max_v", mode(*[col(c) for c in df.columns if 'colour' in c])).show()
+---+-----+-----------+-----------+-----------+-----+
| ID| cash|colour_body|colour_head|colour_foot|max_v|
+---+-----+-----------+-----------+-----------+-----+
| 1|25000| black| black| white|black|
| 2|16000| red| black| white|white|
+---+-----+-----------+-----------+-----------+-----+
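A side note on Python versions (not part of the original answer): since Python 3.8, statistics.mode no longer raises StatisticsError for multimodal data and instead returns the first mode encountered, so on newer interpreters the except branch above only fires for empty input and ties will not fall back to the last colour. If that fallback behaviour matters, a version-independent sketch using collections.Counter could look like this (the name mode_last is made up here):

from collections import Counter

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def mode_last(*x):
    # most_common(1) yields the (value, count) pair with the highest count.
    value, count = Counter(x).most_common(1)[0]
    # Treat "every colour occurs only once" as no detectable maximum and
    # fall back to the last column; note that with more than 3 columns a
    # genuine tie (e.g. 2 vs 2) picks the first tied value instead.
    return value if count > 1 else x[-1]

mode_last = udf(mode_last, StringType())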
Answer 1 (score: 1)
For an arbitrary number of columns, you would generally use the udf solution by @cs95. However, in this particular case with only 3 columns, you can simplify the logic using just pyspark.sql.functions.when, which is more efficient than using a udf.
from pyspark.sql.functions import col, when

def mode_of_3_cols(body, head, foot):
    return (
        when(
            (body == head) | (body == foot),
            body
        ).when(
            (head == foot),
            head
        ).otherwise(foot)
    )
df.withColumn(
    "max_v",
    mode_of_3_cols(col("colour_body"), col("colour_head"), col("colour_foot"))
).show()
#+---+-----+-----------+-----------+-----------+-----+
#| ID| cash|colour_body|colour_head|colour_foot|max_v|
#+---+-----+-----------+-----------+-----------+-----+
#| 1|25000| black| black| white|black|
#| 2|16000| red| black| white|white|
#+---+-----+-----------+-----------+-----------+-----+
You only need to check whether any two of the columns are equal: if so, that value must be the mode. If not, return the last column.
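As a quick sanity check of that fallback (the extra row below is invented for illustration): when all three colours differ, neither when branch matches and the last column wins, mirroring the udf's StatisticsError fallback.

df_tie = spark.createDataFrame(
    [('3', 9000, "green", "red", "blue")],
    ['ID', 'cash', 'colour_body', 'colour_head', 'colour_foot']
)
df_tie.withColumn(
    "max_v",
    mode_of_3_cols(col("colour_body"), col("colour_head"), col("colour_foot"))
).show()
# max_v for this row is "blue" (colour_foot), since no two columns match.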