Create a unique ID for each combination of values from two columns in a Spark dataframe

Date: 2019-04-06 06:47:43

Tags: apache-spark pyspark pyspark-sql

I have a Spark dataframe with six columns, say (col1, col2, ..., col6). I want to create a unique ID for each combination of values in "col1" and "col2" and add it to the dataframe. Can someone help me with some pyspark code?

2 Answers:

Answer 0 (score: 3)

You can use monotonically_increasing_id (pyspark >= 1.6) or monotonicallyIncreasingId (pyspark < 1.6): take the distinct combinations of col_1 and col_2, assign each pair an ID, and then join the result back onto the original dataframe.

>>> from pyspark.sql.functions import monotonically_increasing_id
>>> rdd=sc.parallelize([[12,23,3,4,5,6],[12,23,56,67,89,20],[12,23,0,0,0,0],[12,2,12,12,12,23],[1,2,3,4,56,7],[1,2,3,4,56,7]])
>>> df = rdd.toDF(['col_1','col_2','col_3','col_4','col_5','col_6'])
>>> df.show()
+-----+-----+-----+-----+-----+-----+
|col_1|col_2|col_3|col_4|col_5|col_6|
+-----+-----+-----+-----+-----+-----+
|   12|   23|    3|    4|    5|    6|
|   12|   23|   56|   67|   89|   20|
|   12|   23|    0|    0|    0|    0|
|   12|    2|   12|   12|   12|   23|
|    1|    2|    3|    4|   56|    7|
|    1|    2|    3|    4|   56|    7|
+-----+-----+-----+-----+-----+-----+

>>> df_1=df.groupBy(df.col_1,df.col_2).count().withColumn("id", monotonically_increasing_id()).select(['col_1','col_2','id'])
>>> df_1.show()
+-----+-----+-------------+
|col_1|col_2|           id|
+-----+-----+-------------+
|   12|   23|  34359738368|
|    1|    2|1434519076864|
|   12|    2|1554778161152|
+-----+-----+-------------+

>>> df.join(df_1,(df.col_1==df_1.col_1) & (df.col_2==df_1.col_2)).drop(df_1.col_1).drop(df_1.col_2).show()
+-----+-----+-----+-----+-----+-----+-------------+
|col_3|col_4|col_5|col_6|col_1|col_2|           id|
+-----+-----+-----+-----+-----+-----+-------------+
|    3|    4|    5|    6|   12|   23|  34359738368|
|   56|   67|   89|   20|   12|   23|  34359738368|
|    0|    0|    0|    0|   12|   23|  34359738368|
|    3|    4|   56|    7|    1|    2|1434519076864|
|    3|    4|   56|    7|    1|    2|1434519076864|
|   12|   12|   12|   23|   12|    2|1554778161152|
+-----+-----+-----+-----+-----+-----+-------------+
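A minimal variant of the same idea (not from the original answer, assuming the same df as above): selecting the distinct (col_1, col_2) pairs and joining on a list of column names skips the count() step and the duplicate-column drop() calls. Note that the IDs produced by monotonically_increasing_id are unique but not consecutive, as the output above shows.

>>> pair_ids = df.select('col_1', 'col_2').distinct() \
...              .withColumn('id', monotonically_increasing_id())
>>> df.join(pair_ids, on=['col_1', 'col_2']).show()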

Answer 1 (score: 1)

If you really need to generate a unique ID from col1 and col2, you can also use Spark's sha2 function to create a hash value.

First, generate some dummy data with:

from random import randint

max_range = 10
df1 = spark.createDataFrame(
            [(x, x * randint(1, max_range), x * 10 * randint(1, max_range)) for x in range(1, max_range)], 
            ['C1', 'C2', 'C3'])

>>> df1.show()
+---+---+---+
| C1| C2| C3|
+---+---+---+
|  1|  1| 60|
|  2| 14|180|
|  3| 21|270|
|  4| 16|360|
|  5| 35|250|
|  6| 30|480|
|  7| 28|210|
|  8| 80|320|
|  9| 45|360|
+---+---+---+

Then create a new uid column from columns C2 and C3 with the code below:

from pyspark.sql.functions import col, sha2, concat

df1.withColumn("uid", sha2(concat(col("C2"), col("C3")), 256)).show(10, False)

Output:

+---+---+---+--------------------+
| C1| C2| C3|                 uid|
+---+---+---+--------------------+
|  1|  1| 60|a512db2741cd20693...|
|  2| 14|180|2f6543dc6c0e06e4a...|
|  3| 21|270|bd3c65ddde4c6f733...|
|  4| 16|360|c7a1e8c59fc9dcc21...|
|  5| 35|250|cba1aeb7a72d9ae27...|
|  6| 30|480|ad7352ff8927cf790...|
|  7| 28|210|ea7bc25aa7cd3503f...|
|  8| 80|320|02e1d953517339552...|
|  9| 45|360|b485cf8f710a65755...|
+---+---+---+--------------------+
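One caveat when hashing the raw concatenation: pairs such as (1, 60) and (16, 0) both concatenate to "160" and would receive the same uid. A sketch of one way to avoid that (my addition, not part of the original answer, assuming the same df1 as above) is to put a separator between the values with concat_ws; the explicit casts to string are just to be safe:

from pyspark.sql.functions import col, sha2, concat_ws

# "1|60" and "16|0" now hash to different values
df1.withColumn("uid", sha2(concat_ws("|", col("C2").cast("string"), col("C3").cast("string")), 256)).show(10, False)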