Spark. ~100 million rows. Size exceeds Integer.MAX_VALUE?

Posted: 2016-08-15 19:16:47

Tags: apache-spark

(This is Spark 2.0 running on a small three-node Amazon EMR cluster.)

I have a PySpark job that loads several large text files into a Spark RDD; count() succeeds and returns 158,598,155.

The job then parses each line into a pyspark.sql.Row instance, builds a DataFrame, and runs another count. This second count() on the DataFrame raises an exception inside Spark's internal code with the message Size exceeds Integer.MAX_VALUE. The same job works on smaller amounts of data. Can anyone explain why/how this happens?

org.apache.spark.SparkException: Job aborted due to stage failure: Task 22 in stage 1.0 failed 4 times, most recent failure: Lost task 22.3 in stage 1.0 (TID 77, ip-172-31-97-24.us-west-2.compute.internal): java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:869)
    at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:103)
    at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:91)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1287)
    at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:105)
    at org.apache.spark.storage.BlockManager.getLocalValues(BlockManager.scala:439)
    at org.apache.spark.storage.BlockManager.get(BlockManager.scala:604)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:661)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:330)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:281)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The PySpark code:

raw_rdd = spark_context.textFile(full_source_path)

# DEBUG: This call to count() is expensive
# This count succeeds and returns 158,598,155
logger.info("raw_rdd count = %d", raw_rdd.count())
logger.info("completed getting raw_rdd count!!!!!!!")

# Parse each line into a pyspark.sql.Row; drop lines whose parse returned a falsy value (e.g. None)
row_rdd = raw_rdd.map(row_parse_function).filter(bool)
data_frame = spark_sql_context.createDataFrame(row_rdd, MySchemaStructType)

data_frame.cache()
# This will trigger the Spark internal error
logger.info("row count = %d", data_frame.count())

1 Answer:

Answer 0 (score: 0)

The error does not come from data_frame.count() itself; it arises because parsing the rows with row_parse_function produces some integer values that do not fit into the integer type specified in MySchemaStructType.
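
A minimal sketch of how that diagnosis could be checked before building the DataFrame, assuming the parsed rows carry an integer field (called id here as a placeholder, since the real fields of MySchemaStructType are not shown in the question):

# Hypothetical sanity check: does any parsed value overflow a 32-bit signed int?
# "id" is a placeholder field name; substitute the integer column(s) from the
# real MySchemaStructType. Note this triggers a full pass over row_rdd.
INT32_MAX = 2147483647

max_id = row_rdd.map(lambda row: row["id"]).max()
logger.info("max parsed id = %d (fits IntegerType: %s)",
            max_id, max_id <= INT32_MAX)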

Try widening the integer type in the schema to pyspark.sql.types.LongType(), or let Spark infer the types by omitting the schema (although that slows evaluation down).
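
For example, a sketch of the first suggestion, with hypothetical column names (id and value) standing in for the real fields of MySchemaStructType:

from pyspark.sql.types import StructType, StructField, StringType, LongType

# Use 64-bit LongType instead of 32-bit IntegerType for columns whose parsed
# values can exceed 2,147,483,647. The field names here are placeholders.
my_schema = StructType([
    StructField("id", LongType(), nullable=False),
    StructField("value", StringType(), nullable=True),
])

data_frame = spark_sql_context.createDataFrame(row_rdd, my_schema)

# Or omit the schema entirely and let Spark infer it by sampling row_rdd,
# at the cost of slower evaluation:
# data_frame = spark_sql_context.createDataFrame(row_rdd)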