Compare certain columns of a PySpark DataFrame based on other columns?

Time: 2020-08-25 06:55:24

Tags: pyspark feature-extraction

Suppose I have a PySpark DataFrame (df1) that contains information about some users, as shown below:

+--------+--------+--------+--------+
|user_id |event_id|code    |City    |
+--------+--------+--------+--------+
|   user1| event1 | ABC    | LA     |
|   user1| event2 | ABC    | NYC    |
|   user2| event3 | DEF    | LA     |
|   user2| event4 | GHK    | LA     |
|   user3| event5 | DEF    | NYC    |
|   user3| event6 | DEF    | NYC    |
|   user3| event7 | ABC    | LA     |
+--------+--------+--------+--------+

In this DataFrame there are duplicate user_id values, but each event_id is unique across the whole dataset. Also, a user's code and City may be the same or different from event to event. I also have another PySpark DataFrame (df2), based on the table above, as shown below:

+----------+----------+------------+
|event_id1 |event_id2 | user_match |
+----------+----------+------------+
| event1   | event2   | True       |
| event1   | event4   | False      |
| event2   | event3   | False      |
| event2   | event7   | False      |
| event5   | event6   | True       |
| event6   | event1   | False      |
+----------+----------+------------+

As you can see, I do not have all possible combinations. The goal is to extract features from each pair based on the users' code and City (in order to detect matching users), like this:

+----------+----------+------------+--------+--------+
|event_id1 |event_id2 | user_match |code    |City    |
+----------+----------+------------+--------+--------+
| event1   | event2   | True       | True   | False  |
| event1   | event4   | False      | False  | True   |
| event2   | event3   | False      | False  | False  |
| event2   | event7   | False      | True   | False  |
| event5   | event6   | True       | True   | True   |
| event6   | event1   | False      | False  | False  |
+----------+----------+------------+--------+--------+
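
For reference, both input DataFrames can be rebuilt in memory with a minimal sketch like this (it assumes an active SparkSession named spark; the values are copied from the tables above):

# hypothetical setup: recreate the two sample DataFrames in memory
df1 = spark.createDataFrame(
    [("user1", "event1", "ABC", "LA"),
     ("user1", "event2", "ABC", "NYC"),
     ("user2", "event3", "DEF", "LA"),
     ("user2", "event4", "GHK", "LA"),
     ("user3", "event5", "DEF", "NYC"),
     ("user3", "event6", "DEF", "NYC"),
     ("user3", "event7", "ABC", "LA")],
    ["user_id", "event_id", "code", "City"])

df2 = spark.createDataFrame(
    [("event1", "event2", True),
     ("event1", "event4", False),
     ("event2", "event3", False),
     ("event2", "event7", False),
     ("event5", "event6", True),
     ("event6", "event1", False)],
    ["event_id1", "event_id2", "user_match"])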

I implemented this in PySpark using Pandas, but I don't know how to write it using only the PySpark API:

%spark2.pyspark

import numpy as np

# select all or part of the training pairs
num_train_samples = pdf2.shape[0]
feats_train_array = pdf2[0:num_train_samples].to_numpy()

# temp array that holds one feature column at a time
feats = np.zeros((num_train_samples, 1))

# list of features to extract
feats_titles = ["code", "City"]

# extract features
for ft in feats_titles:
    for i in range(num_train_samples):

        # read the df1 rows belonging to the two events of the i-th pair
        info_pair0 = pdf1.loc[pdf1['event_id'] == pdf2.iloc[i, 0]]
        info_pair1 = pdf1.loc[pdf1['event_id'] == pdf2.iloc[i, 1]]

        # compare values: 1 if both events share the same value, else 0
        feats_pair0 = info_pair0[ft].iloc[0]
        feats_pair1 = info_pair1[ft].iloc[0]
        feats[i] = 1 if feats_pair0 == feats_pair1 else 0

    # append the new feature column to the training array
    feats_train_array = np.append(feats_train_array, feats, axis=1)
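
Here pdf1 and pdf2 are assumed to be the Pandas counterparts of the Spark DataFrames df1 and df2, obtained for example via:

pdf1 = df1.toPandas()
pdf2 = df2.toPandas()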

I assume this is simple code with the PySpark API, but I can't figure it out.

1 Answer:

Answer 0 (score: 0)

Well, I don't know whether this is simpler, but you can do it like this:

from pyspark.sql.functions import col, expr, when

df1 = spark.read.option("header", "true").option("inferSchema", "true").csv("test1.csv")
df2 = spark.read.option("header", "true").option("inferSchema", "true").csv("test2.csv") \
  .withColumn('user_match', col('user_match').cast('boolean'))

# join df1 twice, once per event of the pair, under the aliases 'a' and 'b',
# then compare the code/City values of the two joined rows
df2.join(df1.withColumnRenamed('event_id', 'event_id1').drop('user_id').alias('a'), ['event_id1'], 'inner') \
   .join(df1.withColumnRenamed('event_id', 'event_id2').drop('user_id').alias('b'), ['event_id2'], 'inner') \
   .withColumn('code_match', when(expr('a.code = b.code'), True).otherwise(False)) \
   .withColumn('city_match', when(expr('a.City = b.City'), True).otherwise(False)) \
   .select(*df2.columns, 'code_match', 'city_match').show()

+---------+---------+----------+----------+----------+
|event_id1|event_id2|user_match|code_match|city_match|
+---------+---------+----------+----------+----------+
|   event1|   event2|      true|      true|     false|
|   event1|   event4|     false|     false|      true|
|   event2|   event3|     false|     false|     false|
|   event2|   event7|     false|      true|     false|
|   event5|   event6|      true|      true|      true|
|   event6|   event1|     false|     false|     false|
+---------+---------+----------+----------+----------+
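
As a side note, the two when(...).otherwise(...) expressions can also be written as direct column comparisons; if code or City may contain nulls, eqNullSafe keeps the result a proper boolean instead of null. A sketch of the same join with that variation:

df2.join(df1.withColumnRenamed('event_id', 'event_id1').drop('user_id').alias('a'), ['event_id1'], 'inner') \
   .join(df1.withColumnRenamed('event_id', 'event_id2').drop('user_id').alias('b'), ['event_id2'], 'inner') \
   .withColumn('code_match', col('a.code').eqNullSafe(col('b.code'))) \
   .withColumn('city_match', col('a.City').eqNullSafe(col('b.City'))) \
   .select(*df2.columns, 'code_match', 'city_match').show()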