Maintaining order after partitioning by key with groupByKey or aggregateByKey

Asked: 2015-08-14 11:43:30

Tags: apache-spark rdd apache-spark-1.2

I have data like this:

Machine , date , hours 
123,2014-06-15,15.4 
123,2014-06-16,20.3
123,2014-06-18,11.4 
131,2014-06-15,12.2 
131,2014-06-16,11.5
131,2014-06-17,18.2 
131,2014-06-18,19.2
134,2014-06-15,11.1
134,2014-06-16,16.2

I want to partition by the key Machine and compute the lag of hours with offset 1 and a default value of 0:

Machine , date , hours , lag
123,2014-06-15,15.4,0
123,2014-06-16,20.3,15.4
123,2014-06-18,11.4,20.3
131,2014-06-15,12.2,0
131,2014-06-16,11.5,12.2
131,2014-06-17,18.2,11.5
131,2014-06-18,19.2,18.2
134,2014-06-15,11.1,0
134,2014-06-16,16.2,11.1

I am using the PairedRDD groupByKey method, but it does not produce the rows in the expected order.

1 Answer:

Answer 0 (score: 2):

That's because there really is no given order here. With some exceptions, an RDD should be considered unordered whenever any of the transformations you apply requires a shuffle.

If you need a specific order, you have to sort the data manually:

case class Record(machine: Long, date: java.sql.Date, hours: Double)
case class RecordWithLag(
    machine: Long, date: java.sql.Date, hours: Double, lag: Double
)

def getLag(xs: Seq[Record]): Seq[RecordWithLag] = ???
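
One possible implementation of getLag, as a minimal sketch that assumes the input sequence is already sorted by date (which the pipeline below guarantees):

// Sketch only: prepend the default 0.0 and zip; zip truncates the extra
// trailing element, so each record is paired with the previous record's hours.
def getLag(xs: Seq[Record]): Seq[RecordWithLag] =
  xs.zip(0.0 +: xs.map(_.hours)).map { case (r, prev) =>
    RecordWithLag(r.machine, r.date, r.hours, prev)
  }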

val rdd = sc.parallelize(List(
    Record(123, java.sql.Date.valueOf("2014-06-15"), 15.4), 
    Record(123, java.sql.Date.valueOf("2014-06-16"), 20.3),
    Record(123, java.sql.Date.valueOf("2014-06-18"), 11.4), 
    Record(131, java.sql.Date.valueOf("2014-06-15"), 12.2), 
    Record(131, java.sql.Date.valueOf("2014-06-16"), 11.5),
    Record(131, java.sql.Date.valueOf("2014-06-17"), 18.2), 
    Record(131, java.sql.Date.valueOf("2014-06-18"), 19.2),
    Record(134, java.sql.Date.valueOf("2014-06-15"), 11.1),
    Record(134, java.sql.Date.valueOf("2014-06-16"), 16.2)
))

rdd
  .groupBy(_.machine) // group all records for each machine
  .mapValues(_.toSeq.sortWith((x, y) => x.date.compareTo(y.date) < 0)) // sort each group by date
  .mapValues(getLag) // compute the lag within each sorted group
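
The result of this pipeline is an RDD[(Long, Seq[RecordWithLag])]. If you want a flat RDD of records again, one way is to drop the keys and flatten the values:

// Drop the machine keys and flatten the per-machine sequences back
// into a single RDD[RecordWithLag]
val withLag = rdd
  .groupBy(_.machine)
  .mapValues(_.toSeq.sortWith((x, y) => x.date.compareTo(y.date) < 0))
  .mapValues(getLag)
  .values
  .flatMap(identity)

withLag.collect().foreach(println)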

For better performance you should consider updating your Spark distribution to >= 1.4.0 and using data frames with window functions:

val df = sqlContext.createDataFrame(rdd)
df.registerTempTable("df")
sqlContext.sql(
  """SELECT *, lag(hours, 1, 0) OVER (
        PARTITION BY machine ORDER BY date
      ) lag FROM df"""
).show()

+-------+----------+-----+----+
|machine|      date|hours| lag|
+-------+----------+-----+----+
|    123|2014-06-15| 15.4| 0.0|
|    123|2014-06-16| 20.3|15.4|
|    123|2014-06-18| 11.4|20.3|
|    131|2014-06-15| 12.2| 0.0|
|    131|2014-06-16| 11.5|12.2|
|    131|2014-06-17| 18.2|11.5|
|    131|2014-06-18| 19.2|18.2|
|    134|2014-06-15| 11.1| 0.0|
|    134|2014-06-16| 16.2|11.1|
+-------+----------+-----+----+

The same query can be expressed directly with the DataFrame DSL (this needs the window and function imports):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag
import sqlContext.implicits._

df.select(
  $"*",
  lag($"hours", 1, 0).over(
    Window.partitionBy($"machine").orderBy($"date")
  ).alias("lag")
)
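
As with the SQL variant, calling show() on this DataFrame prints the same table as above.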