Spark driver heap memory problem

Date: 2016-12-15 17:59:21

Tags: apache-spark

The problem I'm seeing is that I slowly run out of Java heap on the master node. Below is a simple example I created, which just repeats the same work 200 times. With the settings below, the master runs out of memory in about an hour with the following error:

16/12/15 17:55:46 INFO YarnSchedulerBackend$YarnDriverEndpoint: Launching task 97578 on executor id: 9 hostname: ip-xxx-xxx-xx-xx
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 20160"...

The code:

import org.apache.spark.sql.functions._
import org.apache.spark._

object MemTest {

 case class X(colval: Long, colname: Long, ID: Long)

 def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("MemTest")
    val spark = new SparkContext(conf)

    // Reuse (or create) a SQLContext on top of the SparkContext so .toDF() is available
    val sc = org.apache.spark.sql.SQLContext.getOrCreate(spark)
    import sc.implicits._

    // Each iteration builds a 5M-row DataFrame, pivots it on colname, and forces evaluation with count()
    for( a <- 1 to 200)
    {
      var df = spark.parallelize((1 to 5000000).map(x => X(x.toLong, x.toLong % 10, x.toLong / 10 ))).toDF()
      df = df.groupBy("ID").pivot("colname").agg(max("colval"))
      df.count
    }

    spark.stop()
  }
}
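
As a side note, Spark 2.0 also exposes SparkSession as a single entry point instead of pairing a SparkContext with a SQLContext. A minimal sketch of the same job written against that API (my own rewrite, keeping the same row counts and iteration count) would look roughly like this:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.max

object MemTestSession {

  case class X(colval: Long, colname: Long, ID: Long)

  def main(args: Array[String]): Unit = {
    // SparkSession bundles the SparkContext and SQLContext in Spark 2.x
    val spark = SparkSession.builder().appName("MemTest").getOrCreate()
    import spark.implicits._

    for (a <- 1 to 200) {
      // Same workload: 5M rows, pivoted on colname, evaluation forced with count()
      var df = spark.sparkContext
        .parallelize((1 to 5000000).map(x => X(x.toLong, x.toLong % 10, x.toLong / 10)))
        .toDF()
      df = df.groupBy("ID").pivot("colname").agg(max("colval"))
      df.count
    }

    spark.stop()
  }
}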

I'm running on AWS emr-5.1.0 with m4.xlarge instances (4 nodes + 1 master). These are my Spark settings:

{
  "Classification": "spark-defaults",
  "Properties": {
    "spark.dynamicAllocation.enabled": "false",
    "spark.executor.instances": "16",
    "spark.executor.memory": "2560m",
    "spark.driver.memory": "768m",
    "spark.executor.cores": "1"
  }
},
{
    "Classification": "spark",
    "Properties": {
      "maximizeResourceAllocation": "false"
    }
},
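
For what it's worth, the driver heap is only 768m even though it is the driver that runs out of memory; a larger value could be tried through the same spark-defaults classification, for example (the 2g figure is only an illustrative guess):

{
  "Classification": "spark-defaults",
  "Properties": {
    "spark.driver.memory": "2g"
  }
}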

I compile it with sbt:

name := "Simple Project"

version := "1.0"

scalaVersion := "2.11.7"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.2" % "provided",
  "org.apache.spark" %% "spark-sql" % "2.0.2")

and then run it with:

spark-submit --class MemTest target/scala-2.11/simple-project_2.11-1.0.jar
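
If there is any doubt about which settings the job actually picks up, spark-submit's --verbose flag prints the resolved Spark properties before launching, e.g.:

spark-submit --verbose --class MemTest target/scala-2.11/simple-project_2.11-1.0.jar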

Looking at memory with jmap -histo, I see java.lang.Long and scala.Tuple2 growing continuously.
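
To target the driver process specifically, I look up its PID first and then take the histogram (the grep pattern below is only illustrative):

jps -lm | grep MemTest            # find the driver JVM's PID
jmap -histo <pid> | head -n 20    # replace <pid> with the PID printed above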

1 Answer:

Answer 0 (score: 0):

Are you sure the Spark version installed on the cluster is 2.0.2?

Or, if there are multiple Spark installations on the cluster, are you sure you are invoking the correct (2.0.2) spark-submit?

(Unfortunately I can't comment yet, which is why I'm posting this as an answer.)
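
For example (a sketch of my own, not from the original answer), both the binary being picked up and the version it reports can be checked with:

which spark-submit
spark-submit --version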