Using rdd.pipe

Asked: 2016-07-05 07:34:55

Tags: apache-spark rdd

When using rdd.pipe(command), errors from the subprocess are not propagated back to the calling process. For example, if one runs:

sc.parallelize(Range(0, 10)).pipe("ls fileThatDontExist").collect

then the stack trace looks like this:

java.lang.Exception: Subprocess exited with status 1
    at org.apache.spark.rdd.PipedRDD$$anon$1.hasNext(PipedRDD.scala:161)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at org.apache.spark.rdd.PipedRDD$$anon$1.foreach(PipedRDD.scala:153)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
    at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
    at org.apache.spark.rdd.PipedRDD$$anon$1.to(PipedRDD.scala:153)
    at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
    at org.apache.spark.rdd.PipedRDD$$anon$1.toBuffer(PipedRDD.scala:153)
    at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
    at org.apache.spark.rdd.PipedRDD$$anon$1.toArray(PipedRDD.scala:153)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
    at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
    at org.apache.spark.scheduler.Task.run(Task.scala:70)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

The error that actually occurred in the command is not mentioned anywhere here; you have to dig through the executor logs to find it:

ls: fileThatDontExist: No such file or directory

Looking at the PipedRDD code, it seems more information could be attached to the exception when it is thrown (for example, by including the contents of proc.getErrorStream in the message):

val exitStatus = proc.waitFor()
if (exitStatus != 0) {
  throw new Exception("Subprocess exited with status " + exitStatus)
}
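
A hedged sketch of what such a change might look like, reading whatever remains on the subprocess's stderr via proc.getErrorStream (assuming that stream has not already been consumed elsewhere, e.g. by a logging thread) and appending it to the exception message:

val exitStatus = proc.waitFor()
if (exitStatus != 0) {
  // Drain the subprocess's stderr so the failure reason travels with the
  // exception instead of staying buried in the executor logs.
  val stderr = scala.io.Source.fromInputStream(proc.getErrorStream).mkString
  throw new Exception(
    s"Subprocess exited with status $exitStatus. Stderr: $stderr")
}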

I have two questions: is there any reason not to do this, and does anyone know of a workaround?

For now I have wrapped the process execution so that when the process fails, I return 0 and print the process's stderr together with a marker. I then map over the RDD, and lines containing the marker throw an exception carrying that stderr (a rough sketch follows below).
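
A minimal sketch of that workaround, assuming a hypothetical wrapper script wrapper.sh and a marker string of my own choosing (neither is from the original code):

// Hypothetical wrapper.sh: runs the real command, always exits 0, and on
// failure echoes the command's stderr prefixed with the marker below.
val marker = "PIPE_ERROR>"

val piped = sc.parallelize(Range(0, 10))
  .pipe("./wrapper.sh ls fileThatDontExist")
  .map { line =>
    if (line.startsWith(marker)) {
      // Fail the task with the subprocess's stderr in the message, so it
      // shows up in the driver-side stack trace instead of only in the
      // executor logs.
      throw new Exception("Piped command failed: " + line.stripPrefix(marker))
    }
    line
  }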

1 Answer:

Answer 0 (score: 1)

As of now (Spark 1.6), the current behavior is to print the stderr of the spawned process to the executor's standard error. This appears to be a very early choice by Matei Zaharia, Spark's own creator, which you can see here, dating back to 2011. I don't see any other way to collect the stderr in the current implementation.

More recently, a change was pushed to Spark 2.0 to propagate any exception from the child process to the calling process (see SPARK-13793), and a small change was made to the exception thrown when the exit status is different from 0 (see line).

This could be proposed as an improvement; let me know if you need any help suggesting it as an enhancement to Spark.