Error converting a sqlContext DataFrame to a pandas DataFrame

Asked: 2018-10-25 10:48:01

Tags: python pandas apache-spark dataframe

I have a sqlContext DataFrame, df2.

Running the show command on it gives the following output:

df2.show(5)
+--------------+-----------+-------------------+-------------------+
|          name|    channel|         start_time|           end_time|
+--------------+-----------+-------------------+-------------------+
|  Sohvaperunat|    Yle TV2|2018-04-14 04:07:54|2018-04-14 04:54:38|
|   Sisarvaimot|TLC Finland|2018-04-14 12:25:00|2018-04-14 13:25:00|
|   Sisarvaimot|TLC Finland|2018-04-15 00:55:00|2018-04-15 01:55:00|
|    Onnela (S)|       MTV3|2018-04-15 15:25:00|2018-04-15 15:55:00|
|X Factor Suomi|       MTV3|2018-04-15 19:30:00|2018-04-15 21:00:00|
+--------------+-----------+-------------------+-------------------+
only showing top 5 rows

But when I try to convert it to a pandas DataFrame for easier processing, I get the following error:

df2_pdf = df2.toPandas()

Py4JJavaError: An error occurred while calling o285.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 3.0 failed 1 times, most recent failure: Lost task 1.0 in stage 3.0 (TID 6, localhost, executor driver): TaskResultLost (result lost from block manager)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
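
The TaskResultLost ("result lost from block manager") failure happens while task results are being shipped back to the driver, which I understand can point to memory pressure or result-size limits on the driver. A minimal, untested sketch of what I could try (the appName and the 2g value are my own guesses, not a confirmed fix):

from pyspark.sql import SparkSession

# Untested sketch: raise the driver-side result-size limit before
# creating the session. Note that spark.driver.memory itself normally
# has to be set before the driver JVM starts (e.g. via
# spark-submit --driver-memory), not from inside a running session.
spark = (SparkSession.builder
         .appName("toPandas-debug")                   # hypothetical name
         .config("spark.driver.maxResultSize", "2g")  # default is 1g
         .getOrCreate())

# Or shrink the data before collecting, so less has to cross the wire:
df2_pdf = df2.select("name", "channel", "start_time", "end_time") \
             .limit(10000) \
             .toPandas()

If the limited conversion succeeds, that would at least suggest the full result is too large to collect to the driver in one go.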

Is there something wrong with the way I am running this?

0 Answers:

There are no answers yet.