PySpark throws java.io.EOFException when reading large files with boto3

Asked: 2015-12-04 11:05:48

Tags: amazon-s3 pyspark eofexception boto3

I am reading files from S3 using boto3, which turns out to be much faster than sc.textFile(...). The files are roughly 300 MB to 1 GB each. The process looks like this:

data = sc.parallelize(list_of_files, numSlices=n_partitions) \
    .flatMap(read_from_s3_and_split_lines)

events = data.aggregateByKey(...)
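
For context, a minimal sketch of what a read_from_s3_and_split_lines helper could look like with boto3 (the "bucket/key" path format and the UTF-8 decoding here are assumptions for illustration, not the original code):

import boto3

def read_from_s3_and_split_lines(s3_path):
    # Hypothetical helper: fetch one object and yield its lines.
    # Assumes each element of list_of_files has the form "bucket/key".
    bucket, key = s3_path.split('/', 1)
    obj = boto3.client('s3').get_object(Bucket=bucket, Key=key)
    body = obj['Body'].read()  # note: the whole object is held in memory at once
    for line in body.decode('utf-8').splitlines():
        yield line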

When I run this, I get the following exception:

15/12/04 10:58:00 WARN TaskSetManager: Lost task 41.3 in stage 0.0 (TID 68, 10.83.25.233): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:203)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:342)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:139)
    ... 15 more

Most of the time only a few tasks crash and the job is able to recover. Sometimes, however, the whole job fails after a number of these errors. I have not been able to track down the root cause; it seems to appear and disappear depending on how many files I read and which exact transformations I apply... It never fails when reading a single file.

1 Answer:

Answer 0 (score: 2)

I have run into similar problems, and my investigation showed that the cause was a lack of available memory for the Python processes. Spark was taking all of the machine's memory, and the Python processes (where the actual PySpark work happens) crashed.

A few suggestions:

  1. Add more memory to the machines,
  2. Unpersist RDDs you no longer need,
  3. Manage memory more intelligently (put some limits on Spark's memory usage); see the sketch after this list.
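
As a rough sketch of points 2 and 3 in PySpark, reusing list_of_files, n_partitions and read_from_s3_and_split_lines from the question (the specific memory values are examples only, not part of this answer):

from pyspark import SparkConf, SparkContext

# Example values only; tune them for your cluster.
conf = (SparkConf()
        .set("spark.executor.memory", "4g")           # JVM heap per executor
        .set("spark.python.worker.memory", "512m"))   # Python workers spill to disk above this
sc = SparkContext(conf=conf)

lines = sc.parallelize(list_of_files, numSlices=n_partitions) \
    .flatMap(read_from_s3_and_split_lines) \
    .cache()

# ... run aggregateByKey(...) and any actions here ...

lines.unpersist()  # release the cached RDD once it is no longer needed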