DataFrame and Dataset - converting values into <k,v> pairs

Time: 2019-01-20 05:42:41

Tags: apache-spark apache-spark-sql apache-spark-dataset

Sample Input (black coloured text) and Output (red coloured text)

I have a DataFrame (the black one); how can I convert it into the red one, i.e. (column number, value)?

[Image attached]
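A rough sketch of the intended conversion (illustrative values, presumably matching the attached image):

Input (black):

EmpId | Experience | Salary
111   | 5          | 50000
222   | 6          | 60000

Desired output (red), each value paired with its column number:

EmpId    | Experience | Salary
(1, 111) | (2, 5)     | (3, 50000)
(1, 222) | (2, 6)     | (3, 60000)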

val df = spark.read.format("csv")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("file:/home/hduser/Desktop/Demo.csv")

case class Employee(EmpId: String, Experience: Double, Salary: Double)

val ds = df.as[Employee]

I need solutions for both the DataFrame and the Dataset approach.

Thanks in advance! :-)

1 Answer:

Answer 0: (score: 0)

I believe this is the structure you are after when you say pairs. Check whether the code below gives the expected output.

Using a DataFrame:

import spark.implicits._
import org.apache.spark.sql.functions._

val data = Seq(("111", 5, 50000), ("222", 6, 60000), ("333", 7, 60000))
val df = data.toDF("EmpId", "Experience", "Salary")

// Wrap each column into a struct of (column number, original value)
val newdf = df
  .withColumn("EmpId", struct(lit("1").as("key"), col("EmpId").as("value")))
  .withColumn("Experience", struct(lit("2").as("key"), col("Experience").as("value")))
  .withColumn("Salary", struct(lit("3").as("key"), col("Salary").as("value")))

newdf.show(false)

Output:

+--------+----------+----------+
|EmpId   |Experience|Salary    |
+--------+----------+----------+
|[1, 111]|[2, 5]    |[3, 50000]|
|[1, 222]|[2, 6]    |[3, 60000]|
|[1, 333]|[2, 7]    |[3, 60000]|
+--------+----------+----------+
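
If the column names should not be hard-coded, the same idea can be written generically. This is a minimal sketch, assuming the same df as above; the values are cast to string so that every column shares one struct type:

// Sketch: wrap every column of df into a (column number, value) struct without naming columns explicitly
import org.apache.spark.sql.functions.{col, lit, struct}

val paired = df.columns.zipWithIndex.map { case (name, idx) =>
  struct(lit((idx + 1).toString).as("key"), col(name).cast("string").as("value")).as(name)
}
df.select(paired: _*).show(false)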

Using a Dataset:

First, you need to define case classes for the new structure; otherwise you will not be able to create the Dataset.

case class EmpData(key: String, value: String)
case class Employee2(EmpId: EmpData, Experience: EmpData, Salary: EmpData)

val ds = df.as[Employee]
// Pair each field with its column number as (key, value)
val newDS = ds.map(rec =>
  (EmpData("1", rec.EmpId), EmpData("2", rec.Experience.toString), EmpData("3", rec.Salary.toString))
)
val finalDS = newDS.toDF("EmpId", "Experience", "Salary").as[Employee2]
finalDS.show(false)

Output:

+--------+----------+----------+
|EmpId   |Experience|Salary    |
+--------+----------+----------+
|[1, 111]|[2, 5]    |[3, 50000]|
|[1, 222]|[2, 6]    |[3, 60000]|
|[1, 333]|[2, 7]    |[3, 60000]|
+--------+----------+----------+
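
As a quick check that the nested fields stay typed, a small sketch using the finalDS defined above: the key/value parts can be read back like ordinary case-class fields.

// Read the value of EmpId and the key of Experience back out of the typed Dataset
finalDS.map(e => (e.EmpId.value, e.Experience.key)).show(false)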

Thanks