Iterating over a Spark DataFrame based on a column value

Date: 2017-03-27 16:57:30

Tags: scala apache-spark apache-spark-sql

I have a DataFrame in Spark with the following data:

{ID:"1",CNT:"2", Age:"21", Class:"3"}   
{ID:"2",CNT:"3", Age:"24", Class:"5"}

I want to expand each row based on its CNT value and produce output like this:

{ID:"1",CNT:"1", Age:"21", Class:"3"}  
{ID:"1",CNT:"2", Age:"21", Class:"3"}  
{ID:"2",CNT:"1", Age:"24", Class:"5"}  
{ID:"2",CNT:"2", Age:"24", Class:"5"}  
{ID:"2",CNT:"3", Age:"24", Class:"5"}

Does anyone know how to achieve this?

2 answers:

Answer 0 (score: 5)

You can convert the DataFrame to an RDD, expand it with flatMap, and then convert it back to a DataFrame:

import spark.implicits._ // needed for toDF and as[Person]

val df = Seq((1, 2, 21, 3), (2, 3, 24, 5)).toDF("ID", "CNT", "Age", "Class")

case class Person(ID: Int, CNT: Int, Age: Int, Class: Int)

// For each row, emit one Person per value of CNT from 1 to the original CNT
df.as[Person]
  .rdd
  .flatMap(p => (1 to p.CNT).map(cnt => Person(p.ID, cnt, p.Age, p.Class)))
  .toDF
  .show
+---+---+---+-----+
| ID|CNT|Age|Class|
+---+---+---+-----+
|  1|  1| 21|    3|
|  1|  2| 21|    3|
|  2|  1| 24|    5|
|  2|  2| 24|    5|
|  2|  3| 24|    5|
+---+---+---+-----+
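The per-row expansion inside that flatMap is plain Scala and can be checked without Spark at all. A minimal sketch of just that logic, using the case class's `copy` method (names here are illustrative, not from the answer):

```scala
case class Person(ID: Int, CNT: Int, Age: Int, Class: Int)

// Expand one record into CNT records, with CNT running from 1 to its original value
def expand(p: Person): Seq[Person] =
  (1 to p.CNT).map(i => p.copy(CNT = i))

val rows = Seq(Person(1, 2, 21, 3), Person(2, 3, 24, 5)).flatMap(expand)
// rows now holds the 5 records shown in the desired output
```

This is exactly the function passed to `flatMap` above; running it on a local `Seq` is a quick way to sanity-check the logic before involving an RDD.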

Answer 1 (score: 2)

If you prefer a DataFrame-only solution, here you go:

import org.apache.spark.sql.functions.{col, explode, udf}

case class Person(ID: Int, CNT: Int, Age: Int, Class: Int)

// UDF that builds the array [1, 2, ..., input] for each row
val iterations: Int => Array[Int] = (input: Int) => (1 to input).toArray
val udf_iterations = udf(iterations)

val p1 = Person(1, 2, 21, 3)
val p2 = Person(2, 3, 24, 5)

val records = Seq(p1, p2)
val df = spark.createDataFrame(records)

df.withColumn("CNT-NEW", explode(udf_iterations(col("CNT"))))
  .drop(col("CNT"))
  .withColumnRenamed("CNT-NEW", "CNT")
  .select(df.columns.map(col): _*)
  .show(false)

+---+---+---+-----+
|ID |CNT|Age|Class|
+---+---+---+-----+
|1  |1  |21 |3    |
|1  |2  |21 |3    |
|2  |1  |24 |5    |
|2  |2  |24 |5    |
|2  |3  |24 |5    |
+---+---+---+-----+
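As a side note: since Spark 2.4 the UDF can be avoided entirely, because the built-in `sequence` function generates the `[1, ..., CNT]` array directly and `explode` flattens it into rows. A hedged sketch, assuming the same `df` as above and a Spark 2.4+ session:

```scala
import org.apache.spark.sql.functions.{col, explode, lit, sequence}

// sequence(lit(1), col("CNT")) builds [1, 2, ..., CNT] per row;
// explode turns each array element into its own row, overwriting CNT.
val expanded = df.withColumn("CNT", explode(sequence(lit(1), col("CNT"))))
expanded.show(false)
```

Built-in functions like `sequence` stay inside Catalyst, so they generally optimize better than an opaque UDF.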