Updating values in a Spark DataFrame

Date: 2017-10-05 09:28:31

Tags: python apache-spark dataframe pyspark spark-dataframe

I have 3 DataFrames:

1. Item dataframe:

+-------+---------+
|id_item|item_code|
+-------+---------+
|    991|    A0049|
|    992|    C1248|
|    993|    C0860|
|    994|    C0757|
|    995|    C0682|
+-------+---------+

2. User dataframe:

+------+--------+
|id_usn|     usn|
+------+--------+
|417567|39063291|
|417568|39063294|
|417569|39063334|
|417570|39063353|
|417571|39063376|
+------+--------+

3. Summary dataframe:

+-------+--------------------+
|id_item|     summary        |
+-------+--------------------+
|    991|[[417567,0.579901...|
|    992|[[417567,0.001029...|
|    443|[[417585,0.219624...|
+-------+--------------------+

and the schema of this DataFrame:

root
 |-- id_item: integer (nullable = true)
 |-- summary: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id_usn: long (nullable = true)
 |    |    |-- rating: double (nullable = true)

Now, inside each struct of the summary array, I want to replace every id_usn in the Summary DataFrame with the corresponding usn from the User DataFrame.

I am using Spark!

Please help me solve this!

1 Answer:

Answer 0 (score: 1)

Hope it helps.

    from pyspark.sql import functions as F

    # Explode the summary array into one row per (id_usn, rating) element
    sdf1 = (summarydf
            .select('id_item', F.explode('summary').alias('col_summary'))
            .select('id_item',
                    F.col('col_summary').id_usn.alias('id_usn'),
                    F.col('col_summary').rating.alias('rating')))

    # Join to translate id_item -> item_code and id_usn -> usn,
    # then rebuild the array of structs, grouped by item_code
    df = (sdf1
          .join(itemdf, 'id_item')
          .join(userdf, 'id_usn')
          .select('item_code', F.struct('usn', 'rating').alias('tmpcol'))
          .groupby('item_code')
          .agg(F.collect_list('tmpcol').alias('summary')))
+---------+--------------------+
|item_code|             summary|
+---------+--------------------+
|    C1248|[[39063291,0.0010...|
|    A0049|[[39063291,0.5799...|
+---------+--------------------+
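The explode -> join -> regroup logic above can be illustrated in plain Python, without a Spark session, using small in-memory copies of the three tables (the sample values below are taken from the DataFrames shown in the question; the truncated ratings are filled with arbitrary illustrative digits):

```python
# Toy versions of the three tables, keyed the same way as the DataFrames
item = {991: "A0049", 992: "C1248"}            # id_item -> item_code
user = {417567: 39063291, 417568: 39063294}    # id_usn  -> usn
summary = {991: [(417567, 0.579901)],
           992: [(417567, 0.001029)]}          # id_item -> [(id_usn, rating)]

result = {}
for id_item, pairs in summary.items():        # "explode": one record per element
    if id_item not in item:
        continue                              # inner join on id_item drops unmatched rows
    code = item[id_item]
    for id_usn, rating in pairs:
        if id_usn not in user:
            continue                          # inner join on id_usn
        # "collect_list": regroup translated pairs under item_code
        result.setdefault(code, []).append((user[id_usn], rating))

print(result)
# {'A0049': [(39063291, 0.579901)], 'C1248': [(39063291, 0.001029)]}
```

As in the Spark answer, both joins are inner joins, so any summary row whose id_item or id_usn has no match simply disappears from the result.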