Pyspark throws IllegalArgumentException: 'Unsupported class file major version 55' when trying to use a udf

Date: 2019-06-05 07:38:05

Tags: python python-2.7 pyspark pyspark-sql

I am running into the following problem when using udfs in pyspark.

My code works fine as long as I don't use any udfs. Simple operations such as selecting columns, or using sql functions such as concat, are no problem. But as soon as I perform an operation on a DataFrame that uses a udf, the program crashes with the following exception:

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/pyspark/jars/spark-unsafe_2.11-2.4.3.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
19/06/05 09:24:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
  File "/Users/szymonk/Desktop/Projects/SparkTest/Application.py", line 59, in <module>
    transformations.select(udf_example(col("gender")).alias("udf_example")).show()
  File "/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/pyspark/sql/dataframe.py", line 378, in show
    print(self._jdf.showString(n, 20, vertical))
  File "/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/Users/szymonk/Desktop/Projects/SparkTest/venv/lib/python2.7/site-packages/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u'Unsupported class file major version 55'

I tried changing JAVA_HOME as suggested in Pyspark error - Unsupported class file major version 55, but it did not help.
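For context on what the message means: a class-file major version identifies the Java release that compiled the bytecode, and since major version 45 maps to Java 1.1, the release number is simply `major - 44`. Major version 55 is therefore Java 11, which Spark 2.4.x does not support (it expects Java 8, major version 52). A quick sanity check of that mapping:

```python
def java_release_from_classfile_major(major):
    """Map a JVM class-file major version to the Java release that emits it.

    Major version 45 corresponds to Java 1.1; each release since adds one,
    so Java 8 -> 52 and Java 11 -> 55.
    """
    return major - 44

print(java_release_from_classfile_major(55))  # 11 -> the JVM Spark was launched with
print(java_release_from_classfile_major(52))  # 8  -> the version Spark 2.4 supports
```

So the exception indicates that py4j/Spark is being run on a Java 11 JVM, even though JAVA_HOME was supposedly changed.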

There is nothing fancy in my code. I just define a simple udf that should return the length of the values in the "gender" column:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, udf
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.appName("SparkTest").getOrCreate()

transformations = spark.read.csv("Resources/PersonalData.csv", header=True)

# Return the length of each value in the "gender" column.
udf_example = udf(lambda x: len(x), IntegerType())
transformations.select(udf_example(col("gender")).alias("udf_example")).show()

I'm not sure whether it matters, but I'm using PyCharm on a Mac.

2 Answers:

Answer 0 (Score: 0)

I found a workaround: I had to switch PyCharm's boot JDK (press Shift twice -> type "jdk" -> choose JDK 1.8).
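If you run the script outside PyCharm, the equivalent fix is to point JAVA_HOME at a JDK 8 installation before Spark launches the JVM. A minimal sketch — the JDK path below is hypothetical and must be adjusted to wherever JDK 8 is installed on your machine:

```python
import os

# Hypothetical JDK 8 location on macOS; adjust to your installation.
os.environ["JAVA_HOME"] = "/Library/Java/JavaVirtualMachines/jdk1.8.0_231.jdk/Contents/Home"

# This must run before pyspark starts the JVM, i.e. before the first
# SparkSession.builder.getOrCreate() call in the process.
```

Setting the variable in the shell (or in PyCharm's run configuration environment) works just as well; the key point is that the Java 8 JVM must be the one py4j launches.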

Answer 1 (Score: 0)

I just switched back from pySpark 2.4.7 to 2.4.2, and it works with both Python 3.6 and 3.7.