I have an Age class, a CSV file, and a running pyspark session.
ages.csv
Name;Age
alpha;noise20noise
beta;noi 3 sE 0
gamma;n 4 oi 0 se
phi;n50ise
detla;3no5ise
kappa;No 4 i 5 sE
omega;25noIsE
which (after parsing the Age column) actually means:
Name;Age
alpha;20
beta;30
gamma;40
phi;50
detla;35
kappa;45
omega;25
The class I defined, in age.py:
import re

class Age:
    # age is a number representing the age of a person
    def __init__(self, age):
        self.age = age

    def __eq__(self, other):
        return self.age == self.__parse(other)

    def __lt__(self, other):
        return self.age < self.__parse(other)

    def __gt__(self, other):
        return self.age > self.__parse(other)

    def __le__(self, other):
        return self.age <= self.__parse(other)

    def __ge__(self, other):
        return self.age >= self.__parse(other)

    def __parse(self, age):
        return int(''.join(re.findall(r'\d', age)))

# Let's test this class
if __name__ == '__main__':
    print(Age(18) == 'noise18noise')
    print(Age(18) <= 'aka 1 fakj 8 jal')
    print(Age(18) >= 'jaa 18 ka')
    print(Age(18) < '1 kda 9')
    print(Age(18) > 'akfa 1 na 7 noise')
Output:
True
True
True
True
True
The test works as expected. Now I want to use the class in pyspark: run pyspark, read ages.csv, and import Age.
Using Python version 3.6.7 (default, Oct 23 2018 19:16:44)
SparkSession available as 'spark'.
>>> ages = spark.read.csv('ages.csv', sep=';', header=True)
19/01/28 14:44:18 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
>>> ages.show()
+-----+------------+
| Name| Age|
+-----+------------+
|alpha|noise20noise|
| beta| noi 3 sE 0|
|gamma| n 4 oi 0 se|
| phi| n50ise|
|detla| 3no5ise|
|kappa| No 4 i 5 sE|
|omega| 25noIsE|
+-----+------------+
Now I want to get everyone who is 20 years old:
>>> from age import Age
>>> ages.filter(ages.Age == Age(20)).show()
This is the error I get:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark-2.3.1-bin-hadoop2.7/python/pyspark/sql/column.py", line 116, in _
njc = getattr(self._jc, name)(jc)
File "/opt/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1248, in __call__
File "/opt/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1218, in _build_args
File "/opt/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1218, in <listcomp>
File "/opt/spark-2.3.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 298, in get_command_part
AttributeError: 'Age' object has no attribute '_get_object_id'
So my first question is how to fix this error.

My first attempt at a fix: I changed the definition of class Age to extend str, like this:
age.py
...
class Age(str):
....
Second attempt:
>>> ages.filter(ages.Age == Age(20)).show()
+----+---+
|Name|Age|
+----+---+
+----+---+
And yet we still have:
>>> 'noise20noise' == Age(20)
True
As you can see, the AttributeError: 'Age' object has no attribute '_get_object_id' is gone, but the filter no longer returns the right answer. That is my second question.
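A plausible explanation (an editor's note, not from the original post): once Age subclasses str, py4j serializes Age(20) as the plain string '20', so Spark ends up comparing the column against '20' and matches no row, while pure-Python comparisons still dispatch to the overridden __eq__. A minimal sketch of that mismatch:

```python
import re

class Age(str):
    # Mirrors the question's str-subclass variant of Age.
    def __init__(self, age):
        self.age = age
    def __eq__(self, other):
        return self.age == int(''.join(re.findall(r'\d', str(other))))
    __hash__ = str.__hash__  # defining __eq__ would otherwise make Age unhashable

print(str(Age(20)))               # '20' -- the bare string py4j would hand to Spark
print('noise20noise' == Age(20))  # True -- Python dispatches to the subclass __eq__
```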
My next attempt: I used a pyspark user-defined function:
>>> import pyspark.sql.functions as F
>>> import pyspark.sql.types as T
>>> eq20 = F.udf(lambda c: c == Age(20), T.BooleanType())
>>> ages.filter(eq20(ages.Age)).show()
+-----+------------+
| Name| Age|
+-----+------------+
|alpha|noise20noise|
+-----+------------+
The eq20 approach works. But here is the thing: I like the first idiom best:
>>> ages.filter(ages.Age == Age(20)).show()
It is simpler and more expressive. I do not want to define a function like eq20, eq21, less_than50, greater_than30, etc. for every comparison. I could define them inside the Age class itself, but I do not know how. Still, here is what I have so far, using a python decorator:
age.py
# other imports here
...
import pyspark.sql.functions as F
import pyspark.sql.types as T

def connect_to_pyspark(function):
    return F.udf(function, T.BooleanType())

class Age(str):
    ...
    @connect_to_pyspark
    def __eq__(self, other):
        return self.age == self.__parse(other)
    ...
    # do the same decorator for the other comparison methods
Testing again:
>>> ages.filter(ages.Age == Age(20)).show()
+----+---+
|Name|Age|
+----+---+
+----+---+
It does not work. Or is my decorator just badly written?

How do I solve all of this? Is my solution to the first problem good enough? If not, what should I do instead? And if it is, how do I fix the second problem?
Answer 0 (score: 1)
Getting ages.Age == Age(20) to work is going to be very difficult, because Spark does not follow the Python conventions for implementing __eq__. More on that later; but if you can live with writing Age(20) == ages.Age instead, then you have a few options. IMHO the simplest is to just wrap the parsing logic in a udf:
parse_udf = F.udf(..., T.IntegerType())

class Age:
    ...
    def __eq__(self, other: Column):
        return F.lit(self.age) == parse_udf(other)
Note that this Age is not a subclass of str; subclassing str will only bring you pain. If you want to use a decorator, the decorator should not return a udf; it should return a function that applies the udf. Like this:
import re

import pyspark.sql.functions as F
import pyspark.sql.types as T

def connect_to_pyspark(function):
    def helper(age, other):
        myUdf = F.udf(lambda item_from_other: function(age, item_from_other), T.BooleanType())
        return myUdf(other)
    return helper

class Age:
    def __init__(self, age):
        self.age = age

    def __parse(self, other):
        return int(''.join(re.findall(r'\d', other)))

    @connect_to_pyspark
    def __eq__(self, other):
        return self.age == self.__parse(other)

ages.withColumn("eq20", Age(20) == ages.Age).show()
More on why you need to write Age(20) == ages.Age. In Python, when you evaluate a == b and a's class does not know how to compare itself with b, it should return NotImplemented; Python then falls back to the reflected b.__eq__(a). But Spark's Column never returns NotImplemented, so the __eq__ you defined on Age is only used when the Age instance is on the left-hand side of the expression.
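That reflected-dispatch rule can be demonstrated in plain Python, with no Spark involved; a minimal sketch:

```python
class Opaque:
    # Stands in for a class that cannot compare itself with Age-like objects.
    def __eq__(self, other):
        return NotImplemented  # asks Python to try the reflected other.__eq__(self)

class Age:
    def __init__(self, age):
        self.age = age
    def __eq__(self, other):
        return True  # pretend the comparison always succeeds

print(Opaque() == Age(20))  # True: Python fell back to Age.__eq__
```

Spark's Column.__eq__ instead always returns a new Column, never NotImplemented, so this fallback never fires and the Age instance must sit on the left-hand side.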