Spark mapPartitions to fill nan values

Asked: 2016-12-08 20:54:29

Tags: scala apache-spark apache-spark-sql rdd

I want to fill nan values in Spark using the last known good observation; see: Spark / Scala: fill nan with last good observation

My current solution uses a window function to accomplish the task, but that is not good because all rows are shuffled into a single partition (see the sketch after the table below). A partition-wise approach like

val imputed: RDD[FooBar] = recordsDF.rdd.mapPartitionsWithIndex { case (i, iter) => fill(i, iter) }

should work better. But strangely, my fill function is never executed. What is wrong with my code?

+----------+--------------------+
|       foo|                 bar|
+----------+--------------------+
|2016-01-01|               first|
|2016-01-02|              second|
|      null|       noValidFormat|
|2016-01-04|lastAssumingSameDate|
+----------+--------------------+
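For reference, the window-based variant I am replacing looks roughly like the following sketch (my reconstruction, not the exact code). Window.orderBy without a partitionBy clause is what forces Spark to pull all rows into a single partition:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.last

// Without partitionBy, Spark logs "No Partition Defined for Window operation!
// Moving all data to a single partition ..." - hence the scalability concern.
// Note also that a null date sorts first under ascending order, which makes
// this variant fragile for forward-filling.
val allPreceding = Window.orderBy("foo").rowsBetween(Long.MinValue, 0)
val filledViaWindow = recordsDF.withColumn("foo",
  last("foo", ignoreNulls = true).over(allPreceding))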

Here is the complete example code:

import java.sql.Date

import org.apache.log4j.{ Level, Logger }
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

case class FooBar(foo: Date, bar: String)

object WindowFunctionExample extends App {

  Logger.getLogger("org").setLevel(Level.WARN)
  val conf: SparkConf = new SparkConf()
    .setAppName("foo")
    .setMaster("local[*]")

  val spark: SparkSession = SparkSession
    .builder()
    .config(conf)
    .enableHiveSupport()
    .getOrCreate()

  import spark.implicits._

  val myDff = Seq(("2016-01-01", "first"), ("2016-01-02", "second"),
    ("2016-wrongFormat", "noValidFormat"),
    ("2016-01-04", "lastAssumingSameDate"))
  val recordsDF = myDff
    .toDF("foo", "bar")
    .withColumn("foo", 'foo.cast("Date"))
    .as[FooBar]
  recordsDF.show

  def notMissing(row: FooBar): Boolean = {
    row.foo != null
  }

  // For every partition index, keep the last row whose date parsed; lastOption
  // is None for partitions that contain no valid row at all.
  val toCarry = recordsDF.rdd.mapPartitionsWithIndex { case (i, iter) =>
    Iterator((i, iter.filter(notMissing(_)).toSeq.lastOption))
  }.collectAsMap
  println("###################### carry ")
  println(toCarry)
  toCarry.foreach(println)
  println("###################### carry ")
  val toCarryBd = spark.sparkContext.broadcast(toCarry)

  def fill(i: Int, iter: Iterator[FooBar]): Iterator[FooBar] = {
    // Seed the carry with this partition's last valid row; .get fails with
    // None.get when the partition contributed no valid row (see edit below).
    var lastNotNullRow: FooBar = toCarryBd.value(i).get
    iter.map(row => {
      if (!notMissing(row))
        FooBar(lastNotNullRow.foo, row.bar) // replace the missing date
      else {
        lastNotNullRow = row // remember the most recent valid row
        row
      }
    })
  }

  // Strangely, the fill step never seems to be executed for the null values.
  val imputed: RDD[FooBar] = recordsDF.rdd.mapPartitionsWithIndex { case (i, iter) => fill(i, iter) }
  val imputedDF = imputed.toDS()

  println(imputedDF.orderBy($"foo").collect.toList)
  imputedDF.show
  spark.stop
}

Edit

I fixed the code as outlined in the comments, but toCarryBd still contains None values. How can that happen when I explicitly filter for valid rows via

def notMissing(row: FooBar): Boolean = { row.foo != null }
iter.filter(notMissing(_)).toSeq.lastOption

? The collected map still holds None values:

(2,None)
(5,None)
(4,None)
(7,Some(FooBar(2016-01-04,lastAssumingSameDate)))
(1,Some(FooBar(2016-01-01,first)))
(3,Some(FooBar(2016-01-02,second)))
(6,None)
(0,None)

Trying to access such a None entry via toCarryBd.value(i).get then fails with NoSuchElementException: None.get.
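These None entries come from empty partitions: with local[*] the small example Dataset is spread over as many partitions as there are cores (eight here, indices 0 to 7), so several partitions hold no rows at all and lastOption correctly returns None for them. A quick way to verify this (a small sketch using the recordsDF from above):

// Print how many rows live in each partition; the partitions reporting 0
// are exactly the ones whose carry entry is None.
recordsDF.rdd
  .mapPartitionsWithIndex { case (i, rows) => Iterator((i, rows.size)) }
  .collect()
  .sortBy(_._1)
  .foreach(println)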

1 Answer:

Answer 0 (score: 2)

First of all, if your foo field can be null, I would recommend creating the case class as:

case class FooBar(foo: Option[Date], bar: String)
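As a quick illustration of what this buys you (my addition, not part of the original answer): with an Option field, Spark's encoder maps the null produced by the failed date cast to None during the .as[FooBar] conversion, so the missing value becomes visible in the type system instead of hiding as a null:

val recordsDS = myDff
  .toDF("foo", "bar")
  .withColumn("foo", 'foo.cast("Date"))
  .as[FooBar] // the unparseable date now arrives as None instead of null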

You can then rewrite your notMissing function as:

def notMissing(row: Option[FooBar]): Boolean = row.isDefined && row.get.foo.isDefined
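Building on that, here is a minimal sketch of a None-safe fill (my adaptation, not code from this answer): instead of calling .get on the carry entry, walk backwards from the current partition to the nearest earlier partition that actually produced a value, so empty partitions no longer trigger NoSuchElementException: None.get. It assumes the Option-based FooBar above and a toCarryBd broadcast of type Map[Int, Option[FooBar]] built as in the question (with the filter adjusted to row.foo.isDefined):

def fill(i: Int, iter: Iterator[FooBar]): Iterator[FooBar] = {
  // Most recent carried row from any earlier partition; stays None only
  // when no partition before i held a valid date (e.g. for partition 0).
  var lastNotNullRow: Option[FooBar] =
    (0 until i).reverse.flatMap(j => toCarryBd.value(j)).headOption
  iter.map { row =>
    if (row.foo.isEmpty)
      FooBar(lastNotNullRow.flatMap(_.foo), row.bar) // fill from the carry
    else {
      lastNotNullRow = Some(row) // remember the most recent valid row
      row
    }
  }
}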