Spark: applying a function to columns in parallel

Date: 2017-01-02 10:45:54

Tags: scala apache-spark parallel-processing apache-spark-sql

Spark will process the data in parallel, but not the operations. In my DAG I want to call a function per column, like Spark processing columns in parallel. The values for each column can be calculated independently of the other columns. Is there any way to achieve such parallelism via the spark-SQL API? Utilizing window functions (Spark dynamic DAG is a lot slower and different from hard coded DAG) helped to optimize the DAG a lot, but it only executes in a serial fashion.

A sample with more information can be found at https://github.com/geoHeil/sparkContrastCoding.

A minimal example follows:

// assuming a SparkSession named `spark` is in scope (as in spark-shell)
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(
    (0, "A", "B", "C", "D"),
    (1, "A", "B", "C", "D"),
    (0, "d", "a", "jkl", "d"),
    (0, "d", "g", "C", "D"),
    (1, "A", "d", "t", "k"),
    (1, "d", "c", "C", "D"),
    (1, "c", "B", "C", "D")
  ).toDF("TARGET", "col1", "col2", "col3TooMany", "col4")

val inputToDrop = Seq("col3TooMany")
val inputToBias = Seq("col1", "col2")

val targetCounts = df.filter(df("TARGET") === 1).groupBy("TARGET").agg(count("TARGET").as("cnt_foo_eq_1"))
val newDF = df.toDF.join(broadcast(targetCounts), Seq("TARGET"), "left")
newDF.cache

def handleBias(df: DataFrame, colName: String, target: String = "TARGET") = {
  val w1 = Window.partitionBy(colName)
  val w2 = Window.partitionBy(colName, target)

  df.withColumn("cnt_group", count("*").over(w2))
    .withColumn("pre2_" + colName, mean(target).over(w1))
    .withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
    .drop("cnt_group")
}

val joinUDF = udf((newColumn: String, newValue: String, codingVariant: Int, results: Map[String, Map[String, Seq[Double]]]) => {
  results.get(newColumn) match {
    case Some(tt) =>
      val nestedArray = tt.getOrElse(newValue, Seq(0.0))
      if (codingVariant == 0) nestedArray.head else nestedArray.last
    case None => throw new Exception("Column not contained in initial data frame")
  }
})

Now I want to apply the handleBias function to all of these columns; unfortunately, this is not executed in parallel:

val res = (inputToDrop ++ inputToBias).toSet.foldLeft(newDF) {
  (currentDF, colName) =>
    logger.info("using col " + colName)
    handleBias(currentDF, colName)
}.drop("cnt_foo_eq_1")

val combined = (inputToDrop ++ inputToBias).toSet.foldLeft(res) {
  (currentDF, colName) =>
    currentDF.withColumn("combined_" + colName, map(col(colName), array(col("pre_" + colName), col("pre2_" + colName))))
}

val columnsToUse = combined
    .select(combined.columns
      .filter(_.startsWith("combined_"))
      map (combined(_)): _*)

val newNames = columnsToUse.columns.map(_.split("combined_").last)
val renamed = columnsToUse.toDF(newNames: _*)

val cols = renamed.columns
val localData = renamed.collect

val columnsMap = cols.map { colName =>
    colName -> localData.flatMap(_.getAs[Map[String, Seq[Double]]](colName)).toMap
}.toMap
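
For context, a hedged sketch of how columnsMap could be fed back through joinUDF to encode the original columns (this usage is my assumption based on the linked repository, not part of the question; typedLit requires Spark 2.2+):

// hypothetical usage: replace each level of a column by its pre-computed statistic
val codingVariant = 0 // 0 -> head of the Seq, 1 -> last
val encoded = (inputToDrop ++ inputToBias).toSet.foldLeft(newDF) {
  (currentDF, colName) =>
    currentDF.withColumn(
      "encoded_" + colName,
      joinUDF(lit(colName), col(colName), lit(codingVariant), typedLit(columnsMap))
    )
}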

1 Answer:

Answer 0 (score: 2):

"The values for each column can be calculated independently of the other columns"

While that is true, it doesn't really help in your case. You can generate a number of independent DataFrames, each with its own additions, but that doesn't mean they can be automatically combined into a single execution plan.

Each application of handleBias shuffles your data twice, and the output DataFrames have a different data distribution than the parent DataFrame. That is why, when you fold over the list of columns, each addition has to be performed separately.
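
One quick way to see this (an illustrative check, not part of the answer) is to chain two applications and inspect the physical plan: each window in handleBias shows up as its own Exchange/Window step, executed one after the other.

// two chained applications -> two pairs of shuffles in one sequential plan
handleBias(handleBias(newDF, "col1"), "col2").explain()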

In theory you could design a pipeline which can be expressed (in pseudocode) as follows; a rough Scala sketch comes after the list:

  • Add a unique ID:

    df_with_id = df.withColumn("id", unique_id())
    
  • Compute each df independently and convert to long format:

    dfs = for (c in columns) 
      yield handle_bias(df, c).withColumn(
        "pres", explode([(pre_name, pre_value), (pre2_name, pre2_value)])
      )
    
  • Union all the partial results:

    combined = dfs.reduce(union)
    
  • pivot to convert from long to wide format:

    combined.groupBy("id").pivot("pres._1").agg(first("pres._2"))
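
A rough Scala sketch of that pseudocode, assuming the DataFrame and handleBias from the question (names such as stat_name / stat_value are illustrative):

// 1. add a unique id so the partial results can be re-combined later
val withId = newDF.withColumn("id", monotonically_increasing_id())

// 2. compute each column independently and explode into long format
val parts = (inputToDrop ++ inputToBias).toSet.toSeq.map { c =>
  handleBias(withId, c)
    .select(
      col("id"),
      explode(map(
        lit("pre_" + c), col("pre_" + c),
        lit("pre2_" + c), col("pre2_" + c)
      )).as(Seq("stat_name", "stat_value")))
}

// 3. union all partial results
val long = parts.reduce(_ union _)

// 4. pivot back from long to wide format
val wide = long.groupBy("id").pivot("stat_name").agg(first("stat_value"))

This only illustrates the shape of the plan the answer describes; it does not by itself guarantee that the per-column parts are evaluated concurrently.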
    

But I doubt it is worth all the fuss. The process you use is extremely heavy and requires substantial network and disk IO.

If the total number of levels (sum count(distinct x)) for x in columns) is relatively low, you can try to compute all statistics in a single pass using for example aggregateByKey with Map[Tuple2[_, _], StatCounter]; otherwise consider downsampling to the point where you can compute the statistics locally.
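
A loose sketch of the single-pass idea (my assumption of what such a job could look like; the key/value layout is illustrative, not taken from the answer):

import org.apache.spark.util.StatCounter

val columnsOfInterest = (inputToDrop ++ inputToBias).toSet.toSeq

// key: (columnName, level), value: running statistics of TARGET for that level
val stats = df.rdd.flatMap { row =>
  val target = row.getAs[Int]("TARGET").toDouble
  columnsOfInterest.map(c => ((c, row.getAs[String](c)), target))
}.aggregateByKey(new StatCounter())(
  (acc, v) => acc.merge(v),
  (a, b) => a.merge(b)
)

// e.g. mean of TARGET per (column, level), computed with a single shuffle
val perLevelMeans = stats.mapValues(_.mean).collectAsMap()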