Spark / PySpark errors on a mysteriously missing /tmp file

Asked: 2015-06-04 23:48:27

Tags: apache-spark runtime-error pyspark

I'm running into a problem with pyspark and a missing /tmp file. I've narrowed the behavior down to a short snippet:

>>> a=sc.parallelize([(16646160,1)])
>>> b=stuff
>>> # b=sc.parallelize(b.collect())
>>> a.join(b).take(10)

This fails, but if I include the commented-out line (which should be exactly the same thing), it succeeds. Here is the error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-101-90fe86df7879> in <module>()
      3 b=stuff.map(lambda x:(16646160,1))
      4 #b=sc.parallelize(b.collect())
----> 5 a.join(b).take(10)
      6 b.take(10)

/usr/lib/spark/python/pyspark/rdd.py in take(self, num)
   1109 
   1110             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1111             res = self.context.runJob(self, takeUpToNumLeft, p, True)
   1112 
   1113             items += res

/usr/lib/spark/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
    816         # SparkContext#runJob.
    817         mappedRDD = rdd.mapPartitions(partitionFunc)
--> 818         it = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, javaPartitions, allowLocal)
    819         return list(mappedRDD._collect_iterator_through_file(it))
    820 

/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
    536         answer = self.gateway_client.send_command(command)
    537         return_value = get_return_value(answer, self.gateway_client,
--> 538                 self.target_id, self.name)
    539 
    540         for temp_arg in temp_args:

/usr/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    298                 raise Py4JJavaError(
    299                     'An error occurred while calling {0}{1}{2}.\n'.
--> 300                     format(target_id, '.', name), value)
    301             else:
    302                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 210.0 failed 1 times, most recent failure: Lost task 1.0 in stage 210.0 (TID 884, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/pyspark/worker.py", line 92, in main
    command = pickleSer.loads(command.value)
  File "/usr/lib/spark/python/pyspark/broadcast.py", line 106, in value
    self._value = self.load(self._path)
  File "/usr/lib/spark/python/pyspark/broadcast.py", line 87, in load
    with open(path, 'rb', 1 << 20) as f:
IOError: [Errno 2] No such file or directory: '/tmp/spark-4a8c591e-9192-4198-a608-c7daa3a5d494/tmpuzsAVM'

    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:137)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:174)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:96)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1468)
    at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
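
The interesting part seems to be the IOError at the bottom: the Python worker is trying to reload the pickled task from a broadcast file under the driver's spark-&lt;uuid&gt; scratch directory in /tmp, and that file has vanished. One way to check whether the scratch directory itself is still there would be something like this (just a diagnostic sketch; I'm assuming Spark 1.x keeps these files in a spark-* directory under the JVM's java.io.tmpdir):

>>> import os
>>> # Ask the JVM where its temp dir is; Spark 1.x puts broadcast temp
>>> # files in a spark-<uuid> directory under it (usually /tmp).
>>> tmpdir = sc._jvm.java.lang.System.getProperty("java.io.tmpdir")
>>> # If a tmp cleaner removed the scratch dir mid-session, broadcast
>>> # loads like the one above would fail with Errno 2.
>>> [d for d in os.listdir(tmpdir) if d.startswith("spark-")]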

In case you're wondering:

>>> b.take(10)

[(16744491, 1),
 (16203827, 1),
 (16695357, 1),
 (16958298, 1),
 (16400458, 1),
 (16810060, 1),
 (11452497, 1),
 (14803033, 1),
 (15630426, 1),
 (14917736, 1)]

So maybe (I thought) there is some strange numeric overflow in there or something, and collecting and re-parallelizing "fixes" the problem. The next snippet proves that hypothesis wrong:

>>> a=sc.parallelize([(16646160,1)])
>>> b=stuff.map(lambda x:(16646160,1))
>>> #b=sc.parallelize(b.collect())
>>> a.join(b).take(10)

It still breaks. (And here too, including the commented-out line fixes the problem.)

So I'm apparently looking at some kind of Spark / PySpark bug. This is Spark 1.2.0. Any ideas?
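
In the meantime, the collect-and-reparallelize trick the snippets stumbled onto can be wrapped up directly. This is just my own workaround packaged as a helper (the name rematerialize is mine, and it only works while the RDD fits in driver memory):

>>> def rematerialize(rdd):
...     # Pull the data back to the driver and ship it out again, so the
...     # resulting RDD no longer depends on the missing /tmp broadcast file.
...     return sc.parallelize(rdd.collect())
...
>>> a.join(rematerialize(b)).take(10)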

0 Answers:

No answers yet.