I am trying to build a new column on a DataFrame, like this:
l = [(2, 1), (1, 1)]
df = spark.createDataFrame(l)

def calc_dif(x, y):
    if (x > y) and (x == 1):
        return x - y

dfNew = df.withColumn("calc", calc_dif(df["_1"], df["_2"]))
dfNew.show()
But instead, I get this:
Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2807412651452069487.py", line 346, in <module>
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2807412651452069487.py", line 334, in <module>
  File "<stdin>", line 38, in <module>
  File "<stdin>", line 36, in calc_dif
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/column.py", line 426, in __nonzero__
    raise ValueError("Cannot convert column into bool: please use '&' for 'and', '|' for 'or', "
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.
Why does this happen? How can I fix it?
Answer 0 (score: 4)
Use a udf:
from pyspark.sql.functions import udf

@udf("integer")
def calc_dif(x, y):
    if (x > y) and (x == 1):
        return x - y
Or (recommended) use when:
from pyspark.sql.functions import when

def calc_dif(x, y):
    return when((x > y) & (x == 1), x - y)
The first evaluates Python objects row by row; the second builds an expression over Spark Columns.
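Either definition is used the same way at the call site. A minimal usage sketch for the when version, reusing df from the question (this usage is my own illustration, not part of the original answer):

# calc_dif now returns a Column expression, so it can be passed
# straight to withColumn; rows that fail the condition get null
dfNew = df.withColumn("calc", calc_dif(df["_1"], df["_2"]))
dfNew.show()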
Answer 1 (score: 2)
It complains because you are handing calc_dif the whole Column objects, not the actual data of the corresponding rows. You need to wrap your calc_dif function in a udf:
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import udf

l = [(2, 1), (1, 1)]
df = spark.createDataFrame(l)

def calc_dif(x, y):
    # through the udf, calc_dif is called once for every row in the DataFrame;
    # x and y are the values of the two columns for that row
    if (x > y) and (x == 1):
        return x - y

udf_calc = udf(calc_dif, IntegerType())
dfNew = df.withColumn("calc", udf_calc("_1", "_2"))
dfNew.show()
# calc_dif returns None for both rows: in (2, 1), x > y but x != 1; in (1, 1), x == y
+---+---+----+
| _1| _2|calc|
+---+---+----+
| 2| 1|null|
| 1| 1|null|
+---+---+----+
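For contrast, a row that does satisfy the condition gets a non-null value. The extra input (1, 0) below is my own illustration, not part of the original answer:

# (1, 0) satisfies x > y and x == 1, so calc_dif returns 1 - 0 = 1
df2 = spark.createDataFrame([(2, 1), (1, 1), (1, 0)])
df2.withColumn("calc", udf_calc("_1", "_2")).show()
# +---+---+----+
# | _1| _2|calc|
# +---+---+----+
# |  2|  1|null|
# |  1|  1|null|
# |  1|  0|   1|
# +---+---+----+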
Answer 2 (score: 0)
For anyone with a similar error: I was trying to pass an RDD when a Pandas object was needed, and got the same error. Apparently, I could solve it with ".toPandas()".
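A minimal sketch of that fix, assuming a Spark DataFrame df (note that toPandas() collects everything to the driver, so it is only suitable for small results):

# convert the (small) Spark DataFrame to a pandas DataFrame on the
# driver before handing it to code that expects a pandas object
pdf = df.toPandas()
print(pdf.head())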
Answer 3 (score: 0)
For anyone encountering the same error message: check your parentheses. Boolean expressions sometimes need more explicit grouping, since & binds more tightly than comparisons like < and >; for example:
DF_New = df1.withColumn(
    'EventStatus',
    F.when((F.col("Adjusted_Timestamp") < F.col("Event_Finish")) &
           (F.col("Adjusted_Timestamp") > F.col("Event_Start")), 1)
     .otherwise(0))