PySpark DataFrame not grouping correctly

Asked: 2019-10-08 21:21:40

Tags: pyspark pyspark-dataframes

I have an application that ran successfully for months. Recently it started failing: the method that groups a PySpark DataFrame and then outputs the result somehow produces a corrupted DataFrame.

Here is sample code showing what I am doing when everything fails:

from pyspark.sql.functions import sum, avg

# Group by three dimension columns and sum two metrics
group_by = pyspark_df_in.groupBy("dimension1", "dimension2", "dimension3")
pyspark_df_out = group_by.agg(sum("metric1").alias("MyMetric1"),
                              sum("metric2").alias("MyMetric2"))
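
For reference, here is a self-contained version of the same pattern on a toy DataFrame (the SparkSession setup and sample rows are just illustrative scaffolding, not my real data; the column names match the placeholders above):

from pyspark.sql import SparkSession
from pyspark.sql.functions import sum as sum_  # aliased so Python's built-in sum is not shadowed

spark = SparkSession.builder.appName("groupby-repro").getOrCreate()

# Toy stand-in for pyspark_df_in
pyspark_df_in = spark.createDataFrame(
    [("a", "x", "p", 1.0, 2.0),
     ("a", "x", "p", 3.0, 4.0),
     ("b", "y", "q", 5.0, 6.0)],
    ["dimension1", "dimension2", "dimension3", "metric1", "metric2"])

group_by = pyspark_df_in.groupBy("dimension1", "dimension2", "dimension3")
pyspark_df_out = group_by.agg(sum_("metric1").alias("MyMetric1"),
                              sum_("metric2").alias("MyMetric2"))
pyspark_df_out.show()  # an aggregation like this works fine on toy data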

If I run print(pyspark_df_in.head(1)), I correctly get back the first row of the dataset. But after grouping by a few dimensions, running print(pyspark_df_out.head(2)) raises the error below. I get a similar error when trying to do essentially anything with the new grouped DataFrame (and I know the group by should produce data, because I have confirmed that it does).
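
Concretely, the two checks look like this (the first succeeds, the second triggers the trace below):

print(pyspark_df_in.head(1))   # works: returns the first Row of the input
print(pyspark_df_out.head(2))  # fails with java.util.NoSuchElementException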

19/10/08 15:13:00 WARN TaskSetManager: Stage 9 contains a task of very large size (282 KB). The maximum recommended task size is 100 KB.
19/10/08 15:13:00 ERROR Executor: Exception in task 1.0 in stage 9.0 (TID 40)
java.util.NoSuchElementException
    at java.util.ArrayList$Itr.next(ArrayList.java:862)
    at org.apache.arrow.vector.VectorLoader.loadBuffers(VectorLoader.java:76)
    at org.apache.arrow.vector.VectorLoader.load(VectorLoader.java:61)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.nextBatch(ArrowConverters.scala:167)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.<init>(ArrowConverters.scala:144)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$.fromBatchIterator(ArrowConverters.scala:143)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:203)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/10/08 15:13:00 ERROR Executor: Exception in task 2.0 in stage 9.0 (TID 41)
java.util.NoSuchElementException
    at java.util.ArrayList$Itr.next(ArrayList.java:862)
    at org.apache.arrow.vector.VectorLoader.loadBuffers(VectorLoader.java:76)
    at org.apache.arrow.vector.VectorLoader.load(VectorLoader.java:61)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.nextBatch(ArrowConverters.scala:167)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.<init>(ArrowConverters.scala:144)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$.fromBatchIterator(ArrowConverters.scala:143)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:203)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 1.0 in stage 9.0 (TID 40, localhost, executor driver): java.util.NoSuchElementException
    at java.util.ArrayList$Itr.next(ArrayList.java:862)
    at org.apache.arrow.vector.VectorLoader.loadBuffers(VectorLoader.java:76)
    at org.apache.arrow.vector.VectorLoader.load(VectorLoader.java:61)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.nextBatch(ArrowConverters.scala:167)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anon$2.<init>(ArrowConverters.scala:144)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$.fromBatchIterator(ArrowConverters.scala:143)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:203)
    at org.apache.spark.sql.execution.arrow.ArrowConverters$$anonfun$3.apply(ArrowConverters.scala:201)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

19/10/08 15:13:00 ERROR TaskSetManager: Task 1 in stage 9.0 failed 1 times; aborting job
19/10/08 15:13:00 WARN TaskSetManager: Lost task 0.0 in stage 9.0 (TID 39, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 3.0 in stage 9.0 (TID 42, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 10.0 in stage 9.0 (TID 49, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 7.0 in stage 9.0 (TID 46, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 6.0 in stage 9.0 (TID 45, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 11.0 in stage 9.0 (TID 50, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 9.0 in stage 9.0 (TID 48, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 8.0 in stage 9.0 (TID 47, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 4.0 in stage 9.0 (TID 43, localhost, executor driver): TaskKilled (Stage cancelled)
19/10/08 15:13:00 WARN TaskSetManager: Lost task 5.0 in stage 9.0 (TID 44, localhost, executor driver): TaskKilled (Stage cancelled)

Information about my environment:

  • Spark context version = 2.4.3
  • Python version = 3.7
  • OS = Linux CentOS 7

Has anyone run into this problem, or does anyone have ideas on how to debug or fix it?
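
One thing the trace does make clear is that every failing task dies inside org.apache.spark.sql.execution.arrow.ArrowConverters, i.e. in Spark's Arrow-based conversion path rather than in the aggregation itself. A minimal way to check whether Arrow is the culprit (a debugging sketch using the standard Spark 2.4 config key; it isolates the problem rather than fixing it):

# Turn off Arrow-based conversion, then re-run the groupBy/agg and head() calls.
# If they now succeed, the failure is in the Arrow path (e.g. a pyspark/pyarrow
# version mismatch) rather than in the grouping logic.
spark.conf.set("spark.sql.execution.arrow.enabled", "false")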

0 Answers:

There are no answers.