PySpark: Uncaught exception in thread stdout writer for python.exe

Date: 2019-01-05 22:06:54

Tags: python apache-spark pyspark pyspark-sql

I'm developing an ETL application with PySpark. The implementation is complete, and it runs fine on subsets of my data. However, when I try it on the whole dataset (2.5 GB of text), I get errors like the following:

[Stage 137:============>(793 + 7) / 800][Stage 139:>              (0 + 1) / 800]Traceback (most recent call last):
  File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 169, in local_connect_and_auth
  File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 144, in _do_server_auth
  File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 653, in loads
  File "c:\spark\python\lib\pyspark.zip\pyspark\serializers.py", line 690, in read_int
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\socket.py", line 586, in readinto
    return self._sock.recv_into(b)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\spark\python\lib\pyspark.zip\pyspark\worker.py", line 290, in <module>
  File "c:\spark\python\lib\pyspark.zip\pyspark\java_gateway.py", line 172, in local_connect_and_auth
NameError: name '_exception_message' is not defined
19/01/05 10:53:28 ERROR Utils: Uncaught exception in thread stdout writer for C:\Users\username\AppData\Local\Continuum\miniconda3\python.exe
java.net.SocketException: socket already closed
    at java.net.TwoStacksPlainSocketImpl.socketShutdown(Native Method)
    at java.net.AbstractPlainSocketImpl.shutdownOutput(AbstractPlainSocketImpl.java:580)
    at java.net.PlainSocketImpl.shutdownOutput(PlainSocketImpl.java:258)
    at java.net.Socket.shutdownOutput(Socket.java:1556)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply$mcV$sp(PythonRunner.scala:263)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply(PythonRunner.scala:263)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1$$anonfun$apply$2.apply(PythonRunner.scala:263)
    at org.apache.spark.util.Utils$.tryLog(Utils.scala:2005)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:263)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)
19/01/05 10:53:28 ERROR Executor: Exception in task 797.0 in stage 137.0 (TID 24032)
java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223)
    at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)
19/01/05 10:53:28 ERROR Executor: Exception in task 796.0 in stage 137.0 (TID 24031)
org.apache.spark.SparkException: Python worker failed to connect back.
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:148)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:76)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
    at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:86)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:67)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:49)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:109)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Software caused connection abort: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:134)
    at java.io.DataOutputStream.writeInt(DataOutputStream.java:198)
    at org.apache.spark.security.SocketAuthHelper.writeUtf8(SocketAuthHelper.scala:96)
    at org.apache.spark.security.SocketAuthHelper.authClient(SocketAuthHelper.scala:57)
    at org.apache.spark.api.python.PythonWorkerFactory.createSimpleWorker(PythonWorkerFactory.scala:143)
    ... 31 more
19/01/05 10:53:29 ERROR TaskSetManager: Task 797 in stage 137.0 failed 1 times; aborting job
Traceback (most recent call last):
  File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 476, in <module>
    Main(sys.argv[1:])
  File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 471, in __init__
    for reportName, report in dataObj.generateReports(sqlContext):
  File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 443, in generateReports
    report = reportGenerator(sqlContext, commonSchema)
  File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 378, in generateByCycleReport
    **self.generateStats(contributionsByCycle[cycle])})
  File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 424, in generateStats
    stats[columnName] = aggregator(self.dataFrames['demographics'][demographicId])
  File "C:/Users/username/Desktop/etc/projectDir/Main.py", line 282, in totalContributed
    return df.agg({"amount": "sum"}).collect()[0]['sum(amount)'] or 0
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\pyspark\sql\dataframe.py", line 466, in collect
    sock_info = self._jdf.collectToPython()
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\py4j\java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\Users\username\AppData\Local\Continuum\miniconda3\lib\site-packages\py4j\protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o273.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 797 in stage 137.0 failed 1 times, most recent failure: Lost task 797.0 in stage 137.0 (TID 24032, localhost, executor driver): java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223)
    at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2074)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
    at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:297)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3200)
    at org.apache.spark.sql.Dataset$$anonfun$collectToPython$1.apply(Dataset.scala:3197)
    at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3259)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3258)
    at org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3197)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
    at java.io.DataOutputStream.write(DataOutputStream.java:107)
    at java.io.FilterOutputStream.write(FilterOutputStream.java:97)
    at org.apache.spark.api.python.PythonRDD$.org$apache$spark$api$python$PythonRDD$$write$1(PythonRDD.scala:211)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
    at org.apache.spark.api.python.PythonRDD$$anonfun$writeIteratorToStream$1.apply(PythonRDD.scala:223)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:223)
    at org.apache.spark.api.python.PythonRunner$$anon$2.writeIteratorToStream(PythonRunner.scala:439)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread$$anonfun$run$1.apply(PythonRunner.scala:247)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1992)
    at org.apache.spark.api.python.BasePythonRunner$WriterThread.run(PythonRunner.scala:170)

[Stage 137:============>(793 + 5) / 800][Stage 139:>              (0 + 2) / 800]

Note that this is just one instance of the error; the exact exception, and where and when the failure occurs, are not consistent between runs. I suspect this has to do with my project's setup rather than the implementation itself. The only part the errors seem to have in common is ERROR Utils: Uncaught exception in thread stdout writer for C:\Users\username\AppData\Local\Continuum\miniconda3\python.exe

I'm not sure why this is happening, since there are almost no references to my own code; the one stack trace that does trace back to my code gives the message java.net.SocketException: Connection reset by peer: socket write error, which I don't understand.

I've looked through other StackOverflow questions about PySpark, and while I haven't found one that matches my problem, it seems scalability issues often come down to configuration. This is the configuration I see on each run (a sketch of how such a configuration would be applied in code follows the listing):

spark.driver.memory: 12g
spark.driver.port: 51126
spark.executor.id: driver
spark.driver.maxResultSize: 12g
spark.memory.offHeap.size: 12g
spark.memory.offHeap.enabled: true
spark.executor.memory: 12g
spark.executor.heartbeatInterval: 36000000s
spark.executor.cores: 4
spark.driver.host: <redacted>
spark.rdd.compress: True
spark.network.timeout: 60000000s
spark.serializer.objectStreamReset: 100
spark.app.name: <redacted>
spark.master: local[*]
spark.submit.deployMode: client
spark.app.id: local-1546685579638
spark.memory.fraction: 0
spark.ui.showConsoleProgress: true
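
For reference, a configuration like this would typically be applied through SparkConf before the session is created. Below is a minimal sketch of that setup, using property names and values from the listing above; "etl-app" is a placeholder and only representative properties are shown. (In local mode, spark.driver.memory generally needs to be passed at JVM launch, e.g. via spark-submit, to actually take effect.)

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Representative properties from the listing above; "etl-app" is a placeholder.
conf = (SparkConf()
        .setAppName("etl-app")
        .setMaster("local[*]")
        .set("spark.driver.maxResultSize", "12g")
        .set("spark.executor.memory", "12g")
        .set("spark.executor.cores", "4")
        .set("spark.memory.offHeap.enabled", "true")
        .set("spark.memory.offHeap.size", "12g")
        .set("spark.executor.heartbeatInterval", "36000000s")
        .set("spark.network.timeout", "60000000s"))

# The session picks up the configuration at creation time.
spark = SparkSession.builder.config(conf=conf).getOrCreate()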

I'd appreciate any help with this issue. Here are the details of my system:

  • Python 3.6 (via Anaconda)
  • PySpark 2.3.2 (using the bundled Java classes, no local Hadoop)
  • PyCharm CE 2018.3.1
  • Windows 10 (16 GB RAM, 8 cores)

1 Answer:

Answer 0 (score: 0)

I see a major socket timeout error in your logs. Try setting spark.executor.heartbeatInterval to 3600s.

Include this in your code, on the line right after you define your conf variable. It should work.

conf.set("spark.executor.heartbeatInterval","3600s")
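In context, that would look something like the sketch below (assuming a local SparkContext; "etl-app" is a placeholder). Note that Spark's documentation recommends keeping spark.executor.heartbeatInterval significantly smaller than spark.network.timeout.

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("etl-app").setMaster("local[*]")
# Set the heartbeat interval right after defining conf, before creating the context.
conf.set("spark.executor.heartbeatInterval", "3600s")
# Keep the network timeout larger than the heartbeat interval.
conf.set("spark.network.timeout", "7200s")

sc = SparkContext(conf=conf)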