Simple PySpark example fails to run

Asked: 2015-07-28 08:16:40

Tags: apache-spark pyspark

We are getting started with Spark (using PySpark) and are running into a problem in a VMware ESX 5.5 environment, on an Ubuntu Server 14.04 LTS virtual machine with Java version "1.8.0_45".

Running a simple sc.parallelize(['2', '4']).collect() produces the following output:

15/07/28 10:11:42 INFO SparkContext: Starting job: collect at <stdin>:1
15/07/28 10:11:42 INFO DAGScheduler: Got job 0 (collect at <stdin>:1) with 2 output partitions (allowLocal=false)
15/07/28 10:11:42 INFO DAGScheduler: Final stage: ResultStage 0(collect at <stdin>:1)
15/07/28 10:11:42 INFO DAGScheduler: Parents of final stage: List()
15/07/28 10:11:42 INFO DAGScheduler: Missing parents: List()
15/07/28 10:11:42 INFO DAGScheduler: Submitting ResultStage 0 (ParallelCollectionRDD[0] at parallelize at PythonRDD.scala:396), which has no missing parents
15/07/28 10:11:42 INFO TaskSchedulerImpl: Cancelling stage 0
15/07/28 10:11:42 INFO DAGScheduler: ResultStage 0 (collect at <stdin>:1) failed in Unknown s
15/07/28 10:11:42 INFO DAGScheduler: Job 0 failed: collect at <stdin>:1, took 0,058933 s
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/spark/spark/python/pyspark/rdd.py", line 745, in collect
    port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/opt/spark/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/opt/spark/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task serialization failed: java.lang.reflect.InvocationTargetException
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
java.lang.reflect.Constructor.newInstance(Constructor.java:422)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:68)
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
org.apache.spark.SparkContext.broadcast(SparkContext.scala:1289)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1419)
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:884)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
    at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1419)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

We found an issue describing the same behavior: https://issues.apache.org/jira/browse/SPARK-9089

Any idea what is going on, or what we could try?

1 answer:

Answer 0 (score: 1):

As stated in the issue:

  We have faced the same problem, and after some digging and a lot of work we fortunately found the root cause.

  It happens because snappy-java extracts its native library into java.io.tempdir (/tmp by default) and sets the executable flag on the extracted file. If you mount /tmp with the "noexec" option, snappy-java cannot set the executable flag and throws an exception. See the SnappyLoader.java code.

  We fixed the problem by not using the "noexec" option when mounting /tmp.

  Sean Owen, if you want to reproduce the problem, mount /tmp with the "noexec" option, or set java.io.tempdir to a directory mounted with "noexec".

  Maybe it would be better if Spark set the property org.xerial.snappy.tempdir to the value of spark.local.dir, but nothing prevents spark.local.dir from being mounted "noexec" as well.

Remounting /tmp without the noexec option solved the problem for us.
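
If remounting /tmp is not an option, a minimal sketch of an alternative workaround is to point the JVM temporary directory, and snappy-java's own org.xerial.snappy.tempdir property, at a directory on an exec-enabled mount via Spark's extraJavaOptions settings. You can check the current mount flags with mount | grep /tmp. The directory /opt/spark/tmp and the app name below are assumptions for illustration, not part of the original report:

    # Hedged sketch: redirect native-library extraction away from a noexec /tmp.
    # /opt/spark/tmp is an assumed, pre-created directory on an exec-enabled mount.
    from pyspark import SparkConf, SparkContext

    java_opts = ("-Djava.io.tmpdir=/opt/spark/tmp "
                 "-Dorg.xerial.snappy.tempdir=/opt/spark/tmp")

    conf = (SparkConf()
            .setAppName("snappy-noexec-workaround")
            .set("spark.driver.extraJavaOptions", java_opts)
            .set("spark.executor.extraJavaOptions", java_opts))

    sc = SparkContext(conf=conf)
    print(sc.parallelize(['2', '4']).collect())  # expected: ['2', '4']
    sc.stop()

Note that in client mode the driver JVM may already be running by the time these settings are read, so the driver-side option might instead need to go into spark-defaults.conf or be passed with spark-submit's --driver-java-options flag.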