PySpark error "'split' is not in list" when calling the split() function

Date: 2019-09-30 12:06:10

Tags: apache-spark pyspark apache-spark-sql pyspark-sql

I have created a DataFrame as follows:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("test").getOrCreate()
categories = spark.read.text("resources/textFile/categories")
categories.show(n=2)
+------------+
|       value|
+------------+
|1,2,Football|
|  2,2,Soccer|
+------------+
only showing top 2 rows

Now, when I convert this DataFrame to an RDD and try to split each row of the RDD on ',' (comma):

crdd = categories.rdd.map(lambda line: line.split(',')[1])
crdd.foreach(lambda lin: print(lin))

While adding the element at position 1 to the crdd RDD, I get the following error:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 13.0 failed 1 times, most recent failure: Lost task 0.0 in stage 13.0 (TID 13, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "C:\Users\Downloads\bigdataSetup\spark-2.2.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\types.py", line 1504, in __getattr__
    idx = self.__fields__.index(item)
ValueError: 'split' is not in list

Note: The data is in CSV format here only to make the example easy to reproduce.

1 Answer:

Answer 0 (score: 1)

Since the data is in CSV format, you can use the read.csv API instead:

categories = spark.read.csv("resources/textFile/categories")
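
With read.csv, each line is parsed into separate columns, so the second field can be selected directly without converting to an RDD. A minimal sketch, assuming the same sample file and the default column names (_c0, _c1, ...) that Spark assigns when no header is present:

# read.csv splits each line on commas into columns _c0, _c1, _c2
categories = spark.read.csv("resources/textFile/categories")
categories.select("_c1").show(n=2)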

Alternatively, keep read.text and modify your code as follows. read.text produces rows with a single column named value, so each element of categories.rdd is a Row object rather than a string; Row.__getattr__ resolves only field names, which is why line.split raised ValueError: 'split' is not in list. Access the string through line.value first:

crdd = categories.rdd.map(lambda line: line.value.split(',')[1])

for i in crdd.take(10): print(i)
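
With the two sample rows shown above, this should print 2 twice, since the second comma-separated field is 2 in both rows. As a side note, the same result can be obtained without leaving the DataFrame API by using pyspark.sql.functions.split (a sketch under the same assumptions; the column alias 'id' is only illustrative):

from pyspark.sql.functions import split

# split the single 'value' column on commas and take the element at index 1
categories.select(split(categories.value, ',')[1].alias('id')).show(n=2)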