I wrote a function that I want to apply to a DataFrame, but first I have to convert the DataFrame to an RDD in order to map it. Then I print so I can see the result:
x = exploded.rdd.map(lambda x: add_final_score(x.toDF()))
print(x.take(2))
The function add_final_score takes a DataFrame, which is why I have to convert x back to a DF before passing it in. However, it gives me this error that toDF is not in a list:
Py4JJavaError Traceback (most recent call last)
<ipython-input-491-11e7b77ecf3f> in <module>()
42 # StructField('segmentName', StringType(), True)])
43 # x = exploded.rdd.map(lambda y: y.toDf())
---> 44 print(x.take(2))
~/spark-2.3.0-bin-hadoop2.7/python/pyspark/rdd.py in take(self, num)
1356
1357 p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1358 res = self.context.runJob(self, takeUpToNumLeft, p)
1359
1360 items += res
~/spark-2.3.0-bin-hadoop2.7/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
999 # SparkContext#runJob.
1000 mappedRDD = rdd.mapPartitions(partitionFunc)
-> 1001 port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
1002 return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
1003
~/spark-2.3.0-bin-hadoop2.7/python/lib/py4j-0.10.6-src.zip/py4j/java_gateway.py in __call__(self, *args)
1158 answer = self.gateway_client.send_command(command)
1159 return_value = get_return_value(
-> 1160 answer, self.gateway_client, self.target_id, self.name)
1161
1162 for temp_arg in temp_args:
~/spark-2.3.0-bin-hadoop2.7/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~/spark-2.3.0-bin-hadoop2.7/python/lib/py4j-0.10.6-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
318 raise Py4JJavaError(
319 "An error occurred while calling {0}{1}{2}.\n".
--> 320 format(target_id, ".", name), value)
321 else:
322 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 78.0 failed 1 times, most recent failure: Lost task 0.0 in stage 78.0 (TID 78, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/lisa/spark-2.3.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/types.py", line 1556, in _getattr_
idx = self.__fields__.index(item)
ValueError: 'toDF' is not in list
What does this mean? What list?
Answer 0 (score: 0):
&#34; toDF&#34;在DataFrame上工作就像你在这里看到的: https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=todf#pyspark.sql.DataFrame.toDF
In your code, I'm guessing that "exploded" is a DataFrame, and after using ".rdd" on it, it becomes an RDD. Then when you use "map", you get an RDD back again, whose elements are Row objects, as sketched below.
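To make the chain of types concrete, here is a small sketch (the comments are mine; "exploded" is the DataFrame from your question):

exploded                          # DataFrame
exploded.rdd                      # RDD of pyspark.sql.Row objects
exploded.rdd.map(lambda r: r)     # still an RDD of Rows; the lambda receives a Row, not a DataFrame

So inside your lambda, x is a single Row, and Row has no toDF method.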
You can't apply "toDF" to RDD rows. That is also what the ValueError means: Row.__getattr__ looks the attribute name up in the Row's __fields__ (the list of the row's column names), and since "toDF" isn't one of them, .index() raises ValueError. That list of column names is the "list" in the message. If you want to turn the RDD back into a DataFrame, you need to use something like this (depending on your Spark version): https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=createdataframe#pyspark.sql.SparkSession.createDataFrame
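A minimal sketch of that round trip (assuming "spark" is an active SparkSession in your notebook):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # or reuse your existing session
rdd = exploded.rdd                          # RDD of Row objects
df_again = spark.createDataFrame(rdd)       # rebuild a DataFrame from the Rows
print(df_again.take(2))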
Also, you can't use "map" to apply a function to DataFrames this way, because an RDD doesn't hold DataFrames; it holds Rows.
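If the goal is to run your scoring logic over every row, one way out is to make the function work per Row instead of per DataFrame. This is only a sketch: add_final_score_row is a hypothetical rewrite of your add_final_score, not code from your question.

# Hypothetical helper: takes one Row, returns a Row (or tuple) with the score added.
# This is an assumed rewrite of add_final_score, which expects a whole DataFrame.
def add_final_score_row(row):
    return row  # replace with your per-row scoring logic

scored_rdd = exploded.rdd.map(add_final_score_row)
scored_df = spark.createDataFrame(scored_rdd)  # back to a DataFrame
print(scored_df.take(2))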