Updating a DataFrame with nested fields - Spark

Date: 2019-04-23 17:26:26

Tags: scala apache-spark dataframe hadoop apache-spark-sql

I have two dataframes as shown below:

Df1

+----------------------+---------+
|products              |visitorId|
+----------------------+---------+
|[[i1,0.68], [i2,0.42]]|v1       |
|[[i1,0.78], [i3,0.11]]|v2       |
+----------------------+---------+

Df2

+---+----------+
| id|      name|
+---+----------+
| i1|Nike Shoes|
| i2|  Umbrella|
| i3|     Jeans|
+---+----------+

Here is the schema of dataframe Df1:

root
 |-- products: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- interest: double (nullable = true)
 |-- visitorId: string (nullable = true)

I want to join the two dataframes so that the output is:

+------------------------------------------+---------+
|products                                  |visitorId|
+------------------------------------------+---------+
|[[i1,0.68,Nike Shoes], [i2,0.42,Umbrella]]|v1       |
|[[i1,0.78,Nike Shoes], [i3,0.11,Jeans]]   |v2       |
+------------------------------------------+---------+

This is my expected output schema:

root
 |-- products: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: string (nullable = true)
 |    |    |-- interest: double (nullable = true)
 |    |    |-- name: string (nullable = true)
 |-- visitorId: string (nullable = true)

How can I do this in Scala? I am using Spark 2.2.0.

Update

I exploded the products array and joined it with Df2, which gave me the output below (a rough sketch of this step follows the table).

+---------+---+--------+----------+
|visitorId| id|interest|      name|
+---------+---+--------+----------+
|       v1| i1|    0.68|Nike Shoes|
|       v1| i2|    0.42|  Umbrella|
|       v2| i1|    0.78|Nike Shoes|
|       v2| i3|    0.11|     Jeans|
+---------+---+--------+----------+
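
For reference, a minimal sketch of how that explode-and-join step might look. This is only an illustration: it assumes the dataframes are bound to df1 and df2 in a spark-shell session, with the nested fields named id and interest as in the schema above.

import org.apache.spark.sql.functions._
import spark.implicits._

// Flatten each product struct into its own row, then look up the name from df2.
val flatDF = df1
  .select($"visitorId", explode($"products").as("product"))
  .select($"visitorId", $"product.id".as("id"), $"product.interest".as("interest"))
  .join(df2, Seq("id"))
  .select("visitorId", "id", "interest", "name")

flatDF.show()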

Now I just need the above dataframe in the JSON format below:

{
    "visitorId": "v1",
    "products": [{
         "id": "i1",
         "name": "Nike Shoes",
         "interest": 0.68
    }, {
         "id": "i2",
         "name": "Umbrella",
         "interest": 0.42
    }]
},
{
    "visitorId": "v2",
    "products": [{
         "id": "i1",
         "name": "Nike Shoes",
         "interest": 0.78
    }, {
         "id": "i3",
         "name": "Jeans",
         "interest": 0.11
    }]
}

2 Answers:

Answer 0 (score: 3)

Try this:

scala> val df1 = Seq((Seq(("i1",0.68),("i2",0.42)), "v1"), (Seq(("i1",0.78),("i3",0.11)), "v2")).toDF("products", "visitorId" )
df1: org.apache.spark.sql.DataFrame = [products: array<struct<_1:string,_2:double>>, visitorId: string]

scala> df1.show(false)
+------------------------+---------+
|products                |visitorId|
+------------------------+---------+
|[[i1, 0.68], [i2, 0.42]]|v1       |
|[[i1, 0.78], [i3, 0.11]]|v2       |
+------------------------+---------+

scala> val df2 = Seq(("i1", "Nike Shoes"),("i2", "Umbrella"), ("i3", "Jeans")).toDF("id", "name")
df2: org.apache.spark.sql.DataFrame = [id: string, name: string]

scala> df2.show(false)
+---+----------+
|id |name      |
+---+----------+
|i1 |Nike Shoes|
|i2 |Umbrella  |
|i3 |Jeans     |
+---+----------+


scala> val withProductsDF = df1.withColumn("individualproducts", explode($"products")).select($"visitorId",$"products",$"individualproducts._1" as "id", $"individualproducts._2" as "interest")
withProductsDF: org.apache.spark.sql.DataFrame = [visitorId: string, products: array<struct<_1:string,_2:double>> ... 2 more fields]

scala> withProductsDF.show(false)
+---------+------------------------+---+--------+
|visitorId|products                |id |interest|
+---------+------------------------+---+--------+
|v1       |[[i1, 0.68], [i2, 0.42]]|i1 |0.68    |
|v1       |[[i1, 0.68], [i2, 0.42]]|i2 |0.42    |
|v2       |[[i1, 0.78], [i3, 0.11]]|i1 |0.78    |
|v2       |[[i1, 0.78], [i3, 0.11]]|i3 |0.11    |
+---------+------------------------+---+--------+


scala> val withProductNamesDF = withProductsDF.join(df2, "id")
withProductNamesDF: org.apache.spark.sql.DataFrame = [id: string, visitorId: string ... 3 more fields]

scala> withProductNamesDF.show(false)
+---+---------+------------------------+--------+----------+
|id |visitorId|products                |interest|name      |
+---+---------+------------------------+--------+----------+
|i1 |v2       |[[i1, 0.78], [i3, 0.11]]|0.78    |Nike Shoes|
|i1 |v1       |[[i1, 0.68], [i2, 0.42]]|0.68    |Nike Shoes|
|i2 |v1       |[[i1, 0.68], [i2, 0.42]]|0.42    |Umbrella  |
|i3 |v2       |[[i1, 0.78], [i3, 0.11]]|0.11    |Jeans     |
+---+---------+------------------------+--------+----------+


scala> val outputDF = withProductNamesDF.groupBy("visitorId").agg(collect_list(struct($"id", $"name", $"interest")) as  "products")
outputDF: org.apache.spark.sql.DataFrame = [visitorId: string, products: array<struct<id:string,name:string,interest:double>>]

scala> outputDF.toJSON.show(false)
+-----------------------------------------------------------------------------------------------------------------------------+
|value                                                                                                                        |
+-----------------------------------------------------------------------------------------------------------------------------+
|{"visitorId":"v2","products":[{"id":"i1","name":"Nike Shoes","interest":0.78},{"id":"i3","name":"Jeans","interest":0.11}]}   |
|{"visitorId":"v1","products":[{"id":"i1","name":"Nike Shoes","interest":0.68},{"id":"i2","name":"Umbrella","interest":0.42}]}|
+-----------------------------------------------------------------------------------------------------------------------------+
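
If the JSON needs to be persisted rather than just displayed, the same outputDF can be written out directly. A small sketch, where the output path is only a placeholder:

// Writes one JSON object per line, matching the structure requested in the question.
outputDF
  .select("visitorId", "products")
  .coalesce(1)                        // optional: a single part file for small data
  .write
  .mode("overwrite")
  .json("/tmp/visitor_products_json") // placeholder path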

Answer 1 (score: 2)

It depends on your particular case, but if the df2 lookup table happens to be small enough, you can try collecting it as a Scala Map and using it inside a UDF. That keeps things simple:

import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

// Collect the small lookup table to the driver as a Map: id -> name
val m = df2.as[(String, String)].collect.toMap

// Rebuild each (id, interest) struct as (id, interest, name)
val addName = udf( (arr: Seq[Row]) => {
    arr.map(i => (i.getAs[String](0), i.getAs[Double](1), m(i.getAs[String](0))))
})

df1.withColumn("products", addName('products)).show(false)

+------------------------------------------+---------+
|products                                  |visitorId|
+------------------------------------------+---------+
|[[i1,0.68,Nike Shoes], [i2,0.42,Umbrella]]|v1       |
|[[i1,0.78,Nike Shoes], [i3,0.11,Jeans]]   |v2       |
+------------------------------------------+---------+
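
One design note: the collected map is captured in the UDF's closure and serialized with every task. If the lookup is reused across many jobs, or grows somewhat larger while still fitting in memory, a broadcast variable is a common alternative. A rough sketch under the same assumptions as above (spark-shell session with df1, df2, and the imports in scope; addNameBc is an illustrative name):

// Broadcast the lookup map once instead of shipping it inside every task closure.
val bm = spark.sparkContext.broadcast(df2.as[(String, String)].collect.toMap)

val addNameBc = udf { (arr: Seq[Row]) =>
  arr.map(i => (i.getAs[String](0), i.getAs[Double](1), bm.value(i.getAs[String](0))))
}

df1.withColumn("products", addNameBc('products)).show(false)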