Spark (Scala) throws SocketException: Connection reset and SocketTimeoutException: Read timed out

Asked: 2015-03-12 21:53:43

Tags: scala apache-spark socketexception socket-timeout-exception

I am trying to load a (very large) RDD of serialized objects into memory on a cluster of EC2 nodes, extract some objects from them, and store the resulting RDD on disk (as an object file). Unfortunately, I sometimes get a SocketException: Connection reset, and a few times a SocketTimeoutException: Read timed out.

Here is the relevant part of my code:

val pairsLocation = args(0)
val pairsRDD = sc.objectFile[Pair](pairsLocation)
// extract the individual objects from each "Pair" (which holds two of them)
val extracted = pairsRDD.filter(myFunc(_._1)).
      flatMap(x => List(x._1, x._2)).distinct
val savePath = "s3 URI"
extracted.saveAsObjectFile(savePath)
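For context, the stack traces below show the failures happening inside Hadoop's `NativeS3FileSystem` (the jets3t-backed `s3n` client) while the tasks read the object file from S3. A mitigation that is sometimes suggested for transient S3 read failures (a sketch only, not a confirmed fix for this case; the values below are illustrative assumptions) is to raise the S3 filesystem's retry settings through the SparkContext's Hadoop configuration before calling `objectFile`:

```scala
// Sketch: raise S3 retry settings before reading. These are the standard
// Hadoop S3 retry keys; the values 10/10 are illustrative assumptions.
sc.hadoopConfiguration.set("fs.s3.maxRetries", "10")        // retries on transient read errors
sc.hadoopConfiguration.set("fs.s3.sleepTimeSeconds", "10")  // backoff between retries
```

The jets3t socket timeout itself is configured separately, via a `jets3t.properties` file on the classpath rather than through the Hadoop configuration.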

Here are the details of the errors (warnings) I get:

15/03/12 18:40:27 WARN scheduler.TaskSetManager: Lost task 574.0 in stage 0.0 (TID 574, ip-10-45-14-27.us-west-2.compute.internal): 
java.net.SocketException: Connection reset
  at java.net.SocketInputStream.read(SocketInputStream.java:196)
  at java.net.SocketInputStream.read(SocketInputStream.java:122)
  at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
  at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:554)
  at sun.security.ssl.InputRecord.read(InputRecord.java:509)
  at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
  at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891)
  at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
  at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:170)
  at java.io.FilterInputStream.read(FilterInputStream.java:133)
  at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:108)
  at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:76)
  at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:136)
  at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:98)
  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
  at java.io.DataInputStream.readFully(DataInputStream.java:195)
  at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
  at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1988)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2120)
  at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:76)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:244)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:210)
  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:202)
  at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:58)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
  at org.apache.spark.scheduler.Task.run(Task.scala:56)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)


15/03/12 18:42:16 WARN scheduler.TaskSetManager: Lost task 380.0 in stage 0.0 (TID 380, ip-10-47-3-111.us-west-2.compute.internal):
 java.net.SocketTimeoutException: Read timed out
  at java.net.SocketInputStream.socketRead0(Native Method)
  at java.net.SocketInputStream.read(SocketInputStream.java:152)
  at java.net.SocketInputStream.read(SocketInputStream.java:122)
  at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
  at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:554)
  at sun.security.ssl.InputRecord.read(InputRecord.java:509)
  at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934)
  at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891)
  at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
  at org.apache.commons.httpclient.ContentLengthInputStream.read(ContentLengthInputStream.java:170)
  at java.io.FilterInputStream.read(FilterInputStream.java:133)
  at org.apache.commons.httpclient.AutoCloseInputStream.read(AutoCloseInputStream.java:108)
  at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:76)
  at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:136)
  at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.read(NativeS3FileSystem.java:98)
  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
  at java.io.DataInputStream.readFully(DataInputStream.java:195)
  at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
  at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1988)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2120)
  at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:76)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:244)
  at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:210)
  at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
  at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:202)
  at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:58)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
  at org.apache.spark.scheduler.Task.run(Task.scala:56)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)

0 Answers