org.apache.spark.SparkException: Job aborted due to stage failure - OOM exception

Date: 2017-10-06 11:32:32

Tags: scala apache-spark apache-spark-sql

In my application I am using the Spark partitioned JDBC read shown below to pull out a table with 5 million rows and 151 columns, and I persist it with DISK_ONLY:

val query = "(select * from destinationlarge) as dest"
val options = Map(
  "url" -> "jdbc:mysql://IPADDRESS:3306/test?useSSL=false",
  "driver" -> "com.mysql.jdbc.Driver",
  "dbtable" -> query,
  "user" -> "root",
  "password" -> "root")

val destination = spark.read
  .options(options)
  .jdbc(options("url"), options("dbtable"), "0", 1, 5, 4, new java.util.Properties())
  .rdd
  .map(_.mkString(","))
  .persist(StorageLevel.DISK_ONLY)

The cluster has 5 data nodes and 1 name node, each with an i3 processor, 4 cores and 4 GB of RAM. After running for some time, one of the executors died and threw the error below:

Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, datanode5, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 139401 ms
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1965)
at org.apache.spark.rdd.RDD.count(RDD.scala:1158)
at com.syntel.spark.sparkDVT$.main(sparkDVT.scala:68)
at com.syntel.spark.sparkDVT.main(sparkDVT.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
As suggested in this link (https://www.dezyre.com/article/how-data-partitioning-in-spark-helps-achieve-more-parallelism/297), I used lowerBound = 1, upperBound = 5 and 4 partitions, so that the number of partitions (4) matches the 4 cores available on each node.
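
For reference, here is a minimal sketch of how those values map onto the partitioned DataFrameReader.jdbc call; the numeric partition column "id" is only a placeholder assumption on my part (my actual call above passes "0"):

import java.util.Properties

// Sketch only: "id" stands in for a real numeric column used to split the read.
// Spark issues numPartitions (4) parallel queries covering [lowerBound, upperBound].
val partitionedRead = spark.read.jdbc(
  options("url"),       // JDBC URL
  options("dbtable"),   // "(select * from destinationlarge) as dest"
  "id",                 // partition column (assumption)
  1L,                   // lowerBound
  5L,                   // upperBound
  4,                    // numPartitions
  new Properties())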

spark-submit

spark-submit --class "com.syntel.spark.sparkDVT" --master yarn --jars --executor-memory 512m --executor-cores 1 --num-executors 5 /root/sparkdvtmysql_2.11-1.0.jar

Please correct me if I am wrong.

Thanks

1 Answer:

Answer 0 (score: 2):

I suggest you use the DataFrame as it is (in Spark 2.0 this is a Dataset[Row]), because a Dataset uses encoders and therefore has a smaller memory footprint than an RDD.

val destination = spark.read
    .options(options)
    .format("jdbc")
    .load()
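
If you still want the read to be split across executors the way your jdbc(url, table, columnName, lowerBound, upperBound, numPartitions, props) call does, the same partitioning can be expressed as options on this reader. A sketch only; the numeric partition column "id" is an assumption and must be replaced with a real column of your table:

// Sketch: partitionColumn/lowerBound/upperBound/numPartitions are the standard
// Spark JDBC data source options; "id" is a hypothetical numeric column.
val destination = spark.read
  .format("jdbc")
  .options(options)
  .option("partitionColumn", "id")
  .option("lowerBound", "1")
  .option("upperBound", "5")
  .option("numPartitions", "4")
  .load()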

If you want to concatenate the columns with a delimiter, you can use concat_ws() - example here

import org.apache.spark.sql.functions.concat_ws
import org.apache.spark.storage.StorageLevel

destination
  .withColumn("column", concat_ws(", ",
     destination.columns.map(destination.col(_)).toSeq : _*))
  .select("id", "column") // id will be used for subtraction with other df
  .persist(StorageLevel.DISK_ONLY)
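
For the subtraction hinted at in the comment above, a hypothetical sketch could be as simple as using except(); sourceDf and destDf are assumed names for two DataFrames prepared the same way (an id column plus the concatenated "column"):

import org.apache.spark.sql.DataFrame

// Hypothetical helper: rows present in destDf that have no exact match in sourceDf.
def missingRows(destDf: DataFrame, sourceDf: DataFrame): DataFrame =
  destDf.except(sourceDf)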

Check this SO post - Comparing RDD/DF/DS - to understand how a Dataset differs from an RDD and what its advantages are.

This may not completely answer your question. I will update the answer based on the replies to my comment.