Expanding a dataset with Spark & Scala

Date: 2015-04-06 13:49:49

Tags: scala apache-spark

Here is my requirement.

Input

customer_id status  start_date  end_date
1   Y   20140101    20140105
2   Y   20140201    20140203

Output

customer_id status  date
1   Y   20140101
1   Y   20140102
1   Y   20140103
1   Y   20140104
1   Y   20140105
2   Y   20140201
2   Y   20140202
2   Y   20140203

I am trying to achieve this via a cartesian product in Spark, but it is very inefficient. My dataset is too large, so I am looking for a better option.
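
For reference, a minimal sketch of what the cartesian-product approach looks like and why it scales poorly; the overall date range and an existing SparkContext named sc are assumptions made purely for illustration:

  import java.time.LocalDate

  val rows = sc.parallelize(Seq(
    (1L, "Y", LocalDate.of(2014, 1, 1), LocalDate.of(2014, 1, 5)),
    (2L, "Y", LocalDate.of(2014, 2, 1), LocalDate.of(2014, 2, 3))
  ))

  // every candidate date in the period of interest (assumed here to be all of 2014)
  val allDates = sc.parallelize(
    Stream.iterate(LocalDate.of(2014, 1, 1))(_.plusDays(1))
      .takeWhile(!_.isAfter(LocalDate.of(2014, 12, 31)))
      .toList
  )

  // cartesian pairs every row with every candidate date before filtering,
  // so rows * 365 pairs are shuffled just to keep a handful of them
  val expanded = rows.cartesian(allDates)
    .filter { case ((_, _, start, end), date) => !date.isBefore(start) && !date.isAfter(end) }
    .map { case ((id, status, _, _), date) => (id, status, date) }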

1 Answer:

Answer 0: (score: 1)

If I understand you correctly, you can do something like this:

  import java.time.LocalDate

  import org.apache.spark.rdd.RDD
  import org.apache.spark.{SparkConf, SparkContext}

  val conf = new SparkConf().setMaster("local[2]").setAppName("test")
  val sc = new SparkContext(conf)

  case class Input(customerId: Long, status: String, startDate: LocalDate, endDate: LocalDate)
  case class Output(customerId: Long, status: String, date: LocalDate)

  val input: RDD[Input] = sc.parallelize(Seq(
    Input(1, "Y", LocalDate.of(2014, 1, 1), LocalDate.of(2014, 1, 5)),
    Input(2, "Y", LocalDate.of(2014, 1, 1), LocalDate.of(2014, 1, 3))
  ))

  // flatMap expands each row locally into one Output per day between
  // startDate and endDate (inclusive); no shuffle or cartesian product is needed.
  val result: RDD[Output] = input flatMap { input =>
    import input._
    val dates = Stream.iterate(startDate)(_.plusDays(1)).takeWhile(!_.isAfter(endDate))
    dates.map(date => Output(customerId, status, date))
  }

  result.collect().foreach(println)

Output:

Output(1,Y,2014-01-01)
Output(1,Y,2014-01-02)
Output(1,Y,2014-01-03)
Output(1,Y,2014-01-04)
Output(1,Y,2014-01-05)
Output(2,Y,2014-01-01)
Output(2,Y,2014-01-02)
Output(2,Y,2014-01-03)
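
As a side note, on a Spark version with the DataFrame API (1.4 or later) the same per-row expansion can be written with a UDF plus explode. This is only a sketch under that assumption; the column names are taken from the question's tables, the input is hard-coded, and sc is the SparkContext created above:

  import java.time.LocalDate
  import java.time.format.DateTimeFormatter

  import org.apache.spark.sql.SQLContext
  import org.apache.spark.sql.functions.{col, explode, udf}

  val sqlContext = new SQLContext(sc)
  import sqlContext.implicits._

  // UDF producing every yyyyMMdd date from start_date to end_date, inclusive.
  // The formatter is created inside the function so nothing non-serializable is captured.
  val dateRange = udf { (start: String, end: String) =>
    val fmt = DateTimeFormatter.ofPattern("yyyyMMdd")
    Stream.iterate(LocalDate.parse(start, fmt))(_.plusDays(1))
      .takeWhile(!_.isAfter(LocalDate.parse(end, fmt)))
      .map(_.format(fmt))
      .toList
  }

  val df = sc.parallelize(Seq(
    (1L, "Y", "20140101", "20140105"),
    (2L, "Y", "20140201", "20140203")
  )).toDF("customer_id", "status", "start_date", "end_date")

  df.withColumn("date", explode(dateRange(col("start_date"), col("end_date"))))
    .select("customer_id", "status", "date")
    .show()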