Based on the first dataset

Time: 2018-06-05 23:01:16

Tags: scala apache-spark apache-spark-sql

I have two Spark datasets. The first has columns accountid and key, where the keys are collected into an array per account ([key1, key2, key3, ...]). The second has two columns, accountid and a set of key/values stored as a JSON string: accountid, {key: value, key: value, ...}. I need to update the value in the second dataset whenever the corresponding key appears in the first dataset.

import org.apache.spark.sql.functions._
val df = sc.parallelize(Seq(
  ("20180610114049", "id1", "key1"),
  ("20180610114049", "id2", "key2"),
  ("20180610114049", "id1", "key1"),
  ("20180612114049", "id2", "key1"),
  ("20180613114049", "id3", "key2"),
  ("20180613114049", "id3", "key3")
)).toDF("date", "accountid", "key")
val gp = df.groupBy("accountid", "date").agg(collect_list("key"))

+---------+--------------+-----------------+
|accountid|          date|collect_list(key)|
+---------+--------------+-----------------+
|      id2|20180610114049|           [key2]|
|      id1|20180610114049|     [key1, key1]|
|      id3|20180613114049|     [key2, key3]|
|      id2|20180612114049|           [key1]|
+---------+--------------+-----------------+


val df2 = sc.parallelize(Seq(
  ("20180610114049", "id1", "{'key1':'0.0','key2':'0.0','key3':'0.0'}"),
  ("20180610114049", "id2", "{'key1':'0.0','key2':'0.0','key3':'0.0'}"),
  ("20180611114049", "id1", "{'key1':'0.0','key2':'0.0','key3':'0.0'}"),
  ("20180612114049", "id2", "{'key1':'0.0','key2':'0.0','key3':'0.0'}"),
  ("20180613114049", "id3", "{'key1':'0.0','key2':'0.0','key3':'0.0'}")
)).toDF("date", "accountid", "result")

+--------------+---------+----------------------------------------+
|date          |accountid|result                                  |
+--------------+---------+----------------------------------------+
|20180610114049|id1      |{'key1':'0.0','key2':'0.0','key3':'0.0'}|
|20180610114049|id2      |{'key1':'0.0','key2':'0.0','key3':'0.0'}|
|20180611114049|id1      |{'key1':'0.0','key2':'0.0','key3':'0.0'}|
|20180612114049|id2      |{'key1':'0.0','key2':'0.0','key3':'0.0'}|
|20180613114049|id3      |{'key1':'0.0','key2':'0.0','key3':'0.0'}|
+--------------+---------+----------------------------------------+

Expected output

+--------------+---------+----------------------------------------+
|date          |accountid|result                                  |
+--------------+---------+----------------------------------------+
|20180610114049|id1      |{'key1':'1.0','key2':'0.0','key3':'0.0'}|
|20180610114049|id2      |{'key1':'0.0','key2':'1.0','key3':'0.0'}|
|20180611114049|id1      |{'key1':'0.0','key2':'0.0','key3':'0.0'}|
|20180612114049|id2      |{'key1':'1.0','key2':'0.0','key3':'0.0'}|
|20180613114049|id3      |{'key1':'0.0','key2':'1.0','key3':'1.0'}|
+--------------+---------+----------------------------------------+

2 Answers:

Answer 0 (score: 1)

After joining the two dataframes you can use a udf function to achieve your requirement. There are of course a few extra pieces involved, such as converting the JSON string to a struct, converting the struct back to JSON, and the use of a case class (comments are provided in the code for further explanation).

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

//aliasing the collected keys
val gp = df.groupBy("accountid","date").agg(collect_list("key").as("keys"))

//schema for converting json to struct
val schema = StructType(Seq(StructField("key1", StringType, true), StructField("key2", StringType, true), StructField("key3", StringType, true)))

//udf function to update the values of the struct, where result is a case class (defined below)
def updateKeysUdf = udf((arr: Seq[String], json: Row) => Seq(json.schema.fieldNames.map(key => if(arr.contains(key)) "1.0" else json.getAs[String](key))).collect{case Array(a,b,c) => result(a,b,c)}.toList(0))

//changing the json string to a struct using the above schema
df2.withColumn("result", from_json(col("result"), schema))
  .as("df2")    //aliasing df2 for joining and selecting
  .join(gp.as("gp"), col("df2.accountid") === col("gp.accountid"), "left")    //aliasing the gp dataframe and joining on accountid
  .select(col("df2.accountid"), col("df2.date"), to_json(updateKeysUdf(col("gp.keys"), col("df2.result"))).as("result"))    //calling the above udf function and finally converting back to a json string
  .show(false)

where result is a case class

case class result(key1: String, key2: String, key3: String)

which should give you

+---------+--------------+----------------------------------------+
|accountid|date          |result                                  |
+---------+--------------+----------------------------------------+
|id3      |20180613114049|{"key1":"0.0","key2":"1.0","key3":"1.0"}|
|id1      |20180610114049|{"key1":"1.0","key2":"0.0","key3":"0.0"}|
|id1      |20180611114049|{"key1":"1.0","key2":"0.0","key3":"0.0"}|
|id2      |20180610114049|{"key1":"0.0","key2":"1.0","key3":"0.0"}|
|id2      |20180610114049|{"key1":"1.0","key2":"0.0","key3":"0.0"}|
|id2      |20180612114049|{"key1":"0.0","key2":"1.0","key3":"0.0"}|
|id2      |20180612114049|{"key1":"1.0","key2":"0.0","key3":"0.0"}|
+---------+--------------+----------------------------------------+

I hope the answer is helpful.

Answer 1 (score: 1)

You will most definitely need a UDF to do this cleanly.

After joining on accountid and date, you can pass both the array and the JSON string to a UDF, parse the JSON inside the UDF with a parser of your choice (I used JSON4S in the example), check whether each key is present in the array and change its value accordingly, then convert the result back to JSON and return it from the UDF. A sketch along these lines is shown below.
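A minimal sketch of that approach, assuming JSON4S (which ships with Spark) and reusing df and df2 from the question; the names keysDf and updateResult are illustrative, and the single quotes in the sample JSON are normalised to double quotes so the parser accepts them:

import org.apache.spark.sql.functions._
import org.json4s._
import org.json4s.jackson.JsonMethods.{parse, compact, render}

// collect the keys per accountid and date, aliased as "keys"
val keysDf = df.groupBy("accountid", "date").agg(collect_list("key").as("keys"))

// hypothetical UDF: parse the JSON string, set every key found in `keys` to "1.0",
// and serialise back to a JSON string; `keys` is null when the left join has no match
val updateResult = udf { (keys: Seq[String], json: String) =>
  val parsed = parse(json.replace('\'', '"'))  // sample data uses single quotes
  val updated = parsed.transformField {
    case JField(name, _) if keys != null && keys.contains(name) => JField(name, JString("1.0"))
  }
  compact(render(updated))
}

df2.as("r")
  .join(keysDf.as("k"),
    col("r.accountid") === col("k.accountid") && col("r.date") === col("k.date"),
    "left")
  .select(col("r.date"), col("r.accountid"),
    updateResult(col("k.keys"), col("r.result")).as("result"))
  .show(false)

Joining on both accountid and date keeps one output row per row of df2, which lines up with the expected output in the question.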
