Spark won't let me count my joined DataFrames

Asked: 2015-10-16 14:50:57

Tags: scala apache-spark spark-dataframe

I'm new to Spark jobs and I have run into the following problem.

When I run a count on any of the newly joined DataFrames, the job runs for a very long time and spills memory to disk. Is there a logic error here?

    // Imports needed to create the context below
    import org.apache.spark.{SparkConf, SparkContext}

    // Pass the Spark configuration
    val conf = new SparkConf()
      .setMaster(threadMaster)
      .setAppName(appName)

    // Create a new spark context
    val sc = new SparkContext(conf)

    // Specify a SQL context and pass in the spark context we created
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)


    // Create three DataFrames for the sent, clicked, and failed files. Mark them as raw, since they will be renamed
    val dfSentRaw = sqlContext.read.parquet(inputPathSent)
    val dfClickedRaw = sqlContext.read.parquet(inputPathClicked)
    val dfFailedRaw  = sqlContext.read.parquet(inputPathFailed)



    // Rename the columns to avoid ambiguity when accessing the fields later
    val dfSent = dfSentRaw.withColumnRenamed("customer_id", "sent__customer_id")
      .withColumnRenamed("campaign_id", "sent__campaign_id")
      .withColumnRenamed("ced_email", "sent__ced_email")
      .withColumnRenamed("event_captured_dt", "sent__event_captured_dt")
      .withColumnRenamed("riid", "sent__riid")


    val dfClicked = dfClickedRaw.withColumnRenamed("customer_id", "clicked__customer_id")
      .withColumnRenamed("event_captured_dt", "clicked__event_captured_dt")
    val dfFailed = dfFailedRaw.withColumnRenamed("customer_id", "failed__customer_id")


    // LEFT Join with CLICKED on two fields, customer_id and campaign_id
    val dfSentClicked = dfSent.join(dfClicked, dfSent("sent__customer_id") === dfClicked("clicked__customer_id")
      && dfSent("sent__campaign_id") === dfClicked("campaign_id"), "left")
    dfSentClicked.count() // THIS WILL NOT WORK

    val dfJoined = dfSentClicked.join(dfFailed, dfSentClicked("sent__customer_id") === dfFailed("failed__customer_id")
      && dfSentClicked("sent__campaign_id") === dfFailed("campaign_id"), "left")

Why can't these two/three DataFrames be counted anymore? Did I mess up some index by renaming the columns?

Thanks!


1 answer:

Answer 0 (score: 1)

That count call is the only actual materialization of your Spark job here, so it isn't count that is the problem but the shuffle being done for the join right before it. You don't have enough memory to do the join without spilling to disk. Spilling to disk in a shuffle is a very easy way to make your Spark jobs take forever =).

One thing that really helps prevent spilling is having more partitions. Then there is less data moving through the shuffle at any given time. You can set spark.sql.shuffle.partitions to control the number of partitions Spark SQL uses in joins and aggregations. It defaults to 200, so you can try higher settings. http://spark.apache.org/docs/latest/sql-programming-guide.html#other-configuration-options
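As a sketch, that option can be set on the SQLContext before running the join (the value 800 below is purely illustrative, not a recommendation):

    // Raise the number of shuffle partitions so each partition carries
    // less data through the join's shuffle and is less likely to spill.
    // 800 is an example value; tune it to your data volume.
    sqlContext.setConf("spark.sql.shuffle.partitions", "800")

    dfSentClicked.count()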

You could also increase the heap size of your local Spark allocation and/or increase the fraction of memory usable for shuffles by increasing spark.shuffle.memoryFraction (default 0.2) and decreasing spark.storage.memoryFraction (default 0.6). The storage fraction is used, for example, when you make a .cache call, and you might not care about that here.
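For example, a minimal sketch of shifting heap from storage to shuffle, set on the SparkConf before the SparkContext is created (the exact fractions are illustrative assumptions, not recommended values):

    // Give shuffles a larger slice of the heap and cached storage a smaller one.
    val conf = new SparkConf()
      .setMaster(threadMaster)
      .setAppName(appName)
      .set("spark.shuffle.memoryFraction", "0.4") // up from the 0.2 default
      .set("spark.storage.memoryFraction", "0.4") // down from the 0.6 default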

If you're inclined to avoid the spills entirely, you can turn spilling off by setting spark.shuffle.spill to false. I believe this will throw an exception if you run out of memory and need to spill, rather than silently taking forever, and it could help you get your memory allocation configured faster.
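A minimal sketch of that switch, set on the same SparkConf as above (this applies to the pre-Spark-1.6 shuffle settings discussed here):

    // Fail fast with an exception instead of silently spilling to disk,
    // making undersized memory settings visible immediately.
    conf.set("spark.shuffle.spill", "false")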