How to compare records in PySpark dataframes

Asked: 2019-02-12 05:30:20

Tags: python-3.x pyspark apache-spark-sql

I want to compare two dataframes and extract records based on the following three conditions:

  1. If a record matches in both dataframes, "SAME" should appear in a new FLAG column.
  2. If a record does not match and it comes from df1 (say No. 66), "DF1" should appear in the FLAG column.
  3. If a record does not match and it comes from df2 (say No. 77), "DF2" should appear in the FLAG column. Here, the entire record has to be considered and validated: a record-wise comparison.
    I also need to check millions of records like this, using PySpark code.

df1:

No,Name,Sal,Address,Dept,Join_Date
11,Sam,1000,ind,IT,2/11/2019
22,Tom,2000,usa,HR,2/11/2019
33,Kom,3500,uk,IT,2/11/2019
44,Nom,4000,can,HR,2/11/2019
55,Vom,5000,mex,IT,2/11/2019
66,XYZ,5000,mex,IT,2/11/2019

df2:

No,Name,Sal,Address,Dept,Join_Date
11,Sam,1000,ind,IT,2/11/2019
22,Tom,2000,usa,HR,2/11/2019
33,Kom,3000,uk,IT,2/11/2019
44,Nom,4000,can,HR,2/11/2019
55,Xom,5000,mex,IT,2/11/2019
77,XYZ,5000,mex,IT,2/11/2019

Expected output:

No,Name,Sal,Address,Dept,Join_Date,FLAG
11,Sam,1000,ind,IT,2/11/2019,SAME
22,Tom,2000,usa,HR,2/11/2019,SAME
33,Kom,3500,uk,IT,2/11/2019,DF1
33,Kom,3000,uk,IT,2/11/2019,DF2
44,Nom,4000,can,HR,2/11/2019,SAME
55,Vom,5000,mex,IT,2/11/2019,DF1
55,Xom,5000,mex,IT,2/11/2019,DF2
66,XYZ,5000,mex,IT,2/11/2019,DF1
77,XYZ,5000,mex,IT,2/11/2019,DF2

I loaded the input data as shown below, but have no idea how to proceed further.

import pandas as pd

df1 = pd.read_csv("D:\\inputs\\file1.csv")

df2 = pd.read_csv("D:\\inputs\\file2.csv")
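
The comparison itself has to run in PySpark, though, so the files could instead be read directly as Spark DataFrames. A minimal sketch, assuming a SparkSession named spark and the same file paths:

# Read the CSVs as Spark DataFrames; header=True takes the column names
# from the first row, inferSchema=True guesses column types from the data.
df1 = spark.read.csv("D:\\inputs\\file1.csv", header=True, inferSchema=True)
df2 = spark.read.csv("D:\\inputs\\file2.csv", header=True, inferSchema=True)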

Any help is appreciated. Thanks.

2 Answers:

Answer 0 (score: 2)

# Requisite packages to import
import sys
from pyspark.sql.functions import lit, count, col, when
from pyspark.sql.window import Window

# Create the two dataframes
df1 = sqlContext.createDataFrame([(11,'Sam',1000,'ind','IT','2/11/2019'),(22,'Tom',2000,'usa','HR','2/11/2019'),
                                 (33,'Kom',3500,'uk','IT','2/11/2019'),(44,'Nom',4000,'can','HR','2/11/2019'),
                                 (55,'Vom',5000,'mex','IT','2/11/2019'),(66,'XYZ',5000,'mex','IT','2/11/2019')],
                                 ['No','Name','Sal','Address','Dept','Join_Date']) 
df2 = sqlContext.createDataFrame([(11,'Sam',1000,'ind','IT','2/11/2019'),(22,'Tom',2000,'usa','HR','2/11/2019'),
                                  (33,'Kom',3000,'uk','IT','2/11/2019'),(44,'Nom',4000,'can','HR','2/11/2019'),
                                  (55,'Xom',5000,'mex','IT','2/11/2019'),(77,'XYZ',5000,'mex','IT','2/11/2019')],
                                  ['No','Name','Sal','Address','Dept','Join_Date']) 
df1 = df1.withColumn('FLAG',lit('DF1'))
df2 = df2.withColumn('FLAG',lit('DF2'))

# Concatenate the two DataFrames, to create one big dataframe.
df = df1.union(df2)

A window function is used to check whether the count of identical rows is greater than 1; if it is, the FLAG column is set to SAME, otherwise it keeps its original value. Finally, duplicates are dropped.

# Window over all record columns, spanning the entire partition
my_window = Window.partitionBy('No','Name','Sal','Address','Dept','Join_Date').rowsBetween(-sys.maxsize, sys.maxsize)
# A row occurring in both dataframes has a window count of 2, so flag it SAME
df = df.withColumn('FLAG', when((count('*').over(my_window) > 1),'SAME').otherwise(col('FLAG'))).dropDuplicates()
df.show()
+---+----+----+-------+----+---------+----+
| No|Name| Sal|Address|Dept|Join_Date|FLAG|
+---+----+----+-------+----+---------+----+
| 33| Kom|3000|     uk|  IT|2/11/2019| DF2|
| 44| Nom|4000|    can|  HR|2/11/2019|SAME|
| 22| Tom|2000|    usa|  HR|2/11/2019|SAME|
| 77| XYZ|5000|    mex|  IT|2/11/2019| DF2|
| 55| Xom|5000|    mex|  IT|2/11/2019| DF2|
| 11| Sam|1000|    ind|  IT|2/11/2019|SAME|
| 66| XYZ|5000|    mex|  IT|2/11/2019| DF1|
| 55| Vom|5000|    mex|  IT|2/11/2019| DF1|
| 33| Kom|3500|     uk|  IT|2/11/2019| DF1|
+---+----+----+-------+----+---------+----+
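
On newer Spark versions, the same unbounded frame is usually written with the window bound constants Spark provides instead of sys.maxsize. A minimal equivalent sketch of the window definition:

# Same partitioning; the frame explicitly spans the whole partition
my_window = Window.partitionBy('No','Name','Sal','Address','Dept','Join_Date')\
                  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)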

Answer 1 (score: 1)

I think you can solve your problem by creating temporary columns that indicate the source, and then a join. After that you only need to check the conditions, i.e. whether both sources are present, or only one of them, and which one.

Consider the following code:

from pyspark.sql.functions import *


df1= sqlContext.createDataFrame([(11,'Sam',1000,'ind','IT','2/11/2019'),\
(22,'Tom',2000,'usa','HR','2/11/2019'),(33,'Kom',3500,'uk','IT','2/11/2019'),\
(44,'Nom',4000,'can','HR','2/11/2019'),(55,'Vom',5000,'mex','IT','2/11/2019'),\
(66,'XYZ',5000,'mex','IT','2/11/2019')], \
["No","Name","Sal","Address","Dept","Join_Date"])

df2= sqlContext.createDataFrame([(11,'Sam',1000,'ind','IT','2/11/2019'),\
(22,'Tom',2000,'usa','HR','2/11/2019'),(33,'Kom',3000,'uk','IT','2/11/2019'),\
(44,'Nom',4000,'can','HR','2/11/2019'),(55,'Xom',5000,'mex','IT','2/11/2019'),\
(77,'XYZ',5000,'mex','IT','2/11/2019')], \
["No","Name","Sal","Address","Dept","Join_Date"])
#creation of your example dataframes

df1 = df1.withColumn("Source1", lit("DF1"))
df2 = df2.withColumn("Source2", lit("DF2"))
#temporary columns to refer the origin later

# Full outer join on all columns; a source column is only non-null if the
# record appears in the corresponding original dataframe. FLAG is set to
# SAME if the record appears in both dataframes, otherwise to the name of
# the single dataframe it appears in; finally the temporary columns are
# dropped and the result is shown.
df1.join(df2, ["No","Name","Sal","Address","Dept","Join_Date"], "full")\
.withColumn("FLAG", when(col("Source1").isNotNull() & col("Source2").isNotNull(), "SAME")\
.otherwise(when(col("Source1").isNotNull(), "DF1").otherwise("DF2")))\
.drop("Source1","Source2").show()

Output:

+---+----+----+-------+----+---------+----+
| No|Name| Sal|Address|Dept|Join_Date|FLAG|
+---+----+----+-------+----+---------+----+
| 33| Kom|3000|     uk|  IT|2/11/2019| DF2|
| 44| Nom|4000|    can|  HR|2/11/2019|SAME|
| 22| Tom|2000|    usa|  HR|2/11/2019|SAME|
| 77| XYZ|5000|    mex|  IT|2/11/2019| DF2|
| 55| Xom|5000|    mex|  IT|2/11/2019| DF2|
| 11| Sam|1000|    ind|  IT|2/11/2019|SAME|
| 66| XYZ|5000|    mex|  IT|2/11/2019| DF1|
| 55| Vom|5000|    mex|  IT|2/11/2019| DF1|
| 33| Kom|3500|     uk|  IT|2/11/2019| DF1|
+---+----+----+-------+----+---------+----+
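
Note that in both answers the row order of show() is arbitrary, since Spark makes no ordering guarantees after a shuffle. To reproduce the row order of the expected output, the flagged dataframe can be sorted first. A minimal sketch, assuming the result of either approach is bound to a variable, here called result for illustration:

# Sort by record number; sorting by FLAG as well puts a DF1 row before
# its DF2 counterpart for the same No
result.orderBy('No', 'FLAG').show()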