Spark: merge 2 DataFrames by adding a row index/number to both DataFrames

Asked: 2016-11-09 13:44:24

Tags: apache-spark pyspark apache-spark-sql

Q: Is there any way in PySpark to merge two DataFrames, or to copy a column of one DataFrame into another?

For example, I have two DataFrames:

DF1              
C1                    C2                                                        
23397414             20875.7353   
5213970              20497.5582   
41323308             20935.7956   
123276113            18884.0477   
76456078             18389.9269 

Second DataFrame

DF2
C3                       C4
2008-02-04               262.00                 
2008-02-05               257.25                 
2008-02-06               262.75                 
2008-02-07               237.00                 
2008-02-08               231.00 

Then I want to add C3 of DF2 to DF1, like this:

New DF              
    C1                    C2          C3                                              
    23397414             20875.7353   2008-02-04
    5213970              20497.5582   2008-02-05
    41323308             20935.7956   2008-02-06
    123276113            18884.0477   2008-02-07
    76456078             18389.9269   2008-02-08

I hope this example is clear.
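For reference, here is a minimal sketch that reproduces the two example DataFrames in PySpark (the SparkSession variable name and the column types, with C3 kept as a string, are assumptions):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Example data from the question (types are assumed)
DF1 = spark.createDataFrame(
    [(23397414, 20875.7353), (5213970, 20497.5582), (41323308, 20935.7956),
     (123276113, 18884.0477), (76456078, 18389.9269)],
    ["C1", "C2"])

DF2 = spark.createDataFrame(
    [("2008-02-04", 262.00), ("2008-02-05", 257.25), ("2008-02-06", 262.75),
     ("2008-02-07", 237.00), ("2008-02-08", 231.00)],
    ["C3", "C4"])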

10 Answers:

Answer 0 (score: 13)

row_number() with a window function (Solution 1), or zipWithIndex.map (Solution 2), should help in this case.

Solution 1: You can use a window function to get this kind of row index.

Then I suggest adding the row number as an extra column, columnindex, to each DataFrame, say df1:

  DF1              
    C1                    C2                 columnindex                                             
    23397414             20875.7353            1
    5213970              20497.5582            2
    41323308             20935.7956            3
    123276113            18884.0477            4
    76456078             18389.9269            5

Second DataFrame

DF2
C3                       C4             columnindex
2008-02-04               262.00            1        
2008-02-05               257.25            2      
2008-02-06               262.75            3      
2008-02-07               237.00            4          
2008-02-08               231.00            5

Now do an inner join of df1 and df2 on this column, and you will get the output below.

Something like this:

from pyspark.sql.window import Window
from pyspark.sql.functions import row_number, lit  # row_number replaced the old rowNumber function (Spark 1.6+)

# Window with no partitioning; ordering by a constant just assigns numbers
# in whatever order the rows come in (the order is not guaranteed).
w = Window.orderBy(lit("A"))

df1 = ...  # df1 as shown above
df2 = ...  # df2 as shown above

df11 = df1.withColumn("columnindex", row_number().over(w))
df22 = df2.withColumn("columnindex", row_number().over(w))

newDF = df11.join(df22, df11.columnindex == df22.columnindex, 'inner').drop(df22.columnindex)
newDF.show()



New DF              
    C1                    C2          C3                                              
    23397414             20875.7353   2008-02-04
    5213970              20497.5582   2008-02-05
    41323308             20935.7956   2008-02-06
    123276113            18884.0477   2008-02-07
    76456078             18389.9269   2008-02-08

Solution 2: Another good way (probably the best :)) is in Scala; you can convert it to PySpark:

import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

/**
 * Add a column index to a DataFrame
 */
def addColumnIndex(df: DataFrame) = sqlContext.createDataFrame(
  // Add the column index to every row
  df.rdd.zipWithIndex.map { case (row, columnindex) => Row.fromSeq(row.toSeq :+ columnindex) },
  // Extend the schema with the new column
  StructType(df.schema.fields :+ StructField("columnindex", LongType, false))
)

// Add the index now...
val df1WithIndex = addColumnIndex(df1)
val df2WithIndex = addColumnIndex(df2)

// Now time to join...
val newone = df1WithIndex
  .join(df2WithIndex, Seq("columnindex"))
  .drop("columnindex")

Answer 1 (score: 7)

I thought I'd share the Python (PySpark) translation of Solution #2 from @Ram Ghadiyaram's answer above:

from pyspark.sql.functions import col
def addColumnIndex(df): 
  # Create new column names
  oldColumns = df.schema.names
  newColumns = oldColumns + ["columnindex"]

  # Add Column index
  df_indexed = df.rdd.zipWithIndex().map(lambda (row, columnindex): \
                                         row + (columnindex,)).toDF()

  #Rename all the columns
  new_df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx], 
                  newColumns[idx]), xrange(len(oldColumns)), df_indexed)   
  return new_df

# Add index now...
df1WithIndex = addColumnIndex(df1)
df2WithIndex = addColumnIndex(df2)

#Now time to join ...
newone = df1WithIndex.join(df2WithIndex, col("columnindex"),
                           'inner').drop("columnindex")

Answer 2 (score: 3)

I referred to his (@Jed's) answer:

# Python 2 syntax (tuple-unpacking lambdas and xrange); see the Python 3 version further below.
def addColumnIndex(df):
    # Get the old column names and add a column "columnindex"
    oldColumns = df.columns
    newColumns = oldColumns + ["columnindex"]

    # Add the column index
    df_indexed = df.rdd.zipWithIndex().map(lambda (row, columnindex): \
                                           row + (columnindex,)).toDF()

    # Rename all the columns (toDF() auto-generates names like _1, _2, ...)
    oldColumns = df_indexed.columns
    new_df = reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx],
                    newColumns[idx]), xrange(len(oldColumns)), df_indexed)
    return new_df

# Add the index now...
df1WithIndex = addColumnIndex(df1)
df2WithIndex = addColumnIndex(df2)

# Now time to join on the shared index column...
newone = df1WithIndex.join(df2WithIndex, 'columnindex',
                           'inner').drop("columnindex")

Answer 3 (score: 1)

Here is a simple example that may help you, even if you have already solved the problem.

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.StructType
import spark.implicits._

// Create the first DataFrame
val df1 = spark.sparkContext.parallelize(Seq(1, 2, 1)).toDF("lavel1")

// Create the second DataFrame
val df2 = spark.sparkContext.parallelize(Seq((1.0, 12.1), (12.1, 1.3), (1.1, 0.3))).toDF("f1", "f2")

// Combine both DataFrames by zipping their RDDs row by row
// (zip requires the same number of partitions and elements per partition)
val combinedRow = df1.rdd.zip(df2.rdd).map {
  // Convert both rows to Seq, concatenate them, and return a single Row
  case (df1Data, df2Data) => Row.fromSeq(df1Data.toSeq ++ df2Data.toSeq)
}

// Create a new schema from both DataFrames' schemas
val combinedschema = StructType(df1.schema.fields ++ df2.schema.fields)

// Create a new DataFrame from the combined rows and schema
val finalDF = spark.sqlContext.createDataFrame(combinedRow, combinedschema)

finalDF.show

Answer 4 (score: 1)

This answer solved this problem for me:

import pyspark.sql.functions as sparkf

# This will return a new DF with all the columns + id
res = df.withColumn('id', sparkf.monotonically_increasing_id())

Credit to Arkadi T
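A minimal sketch of how this might be combined with a join to merge the question's DF1 and DF2 (this assumes the generated ids line up across both DataFrames, which depends on their partitioning; the answers below discuss this caveat):

import pyspark.sql.functions as sparkf

# Add the same kind of id column to both DataFrames (hypothetical usage)
df1_id = DF1.withColumn('id', sparkf.monotonically_increasing_id())
df2_id = DF2.withColumn('id', sparkf.monotonically_increasing_id())

# Join on the id and drop it to get the merged result
new_df = df1_id.join(df2_id, 'id', 'inner').drop('id')
new_df.show()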

Answer 5 (score: 0)

Expanding on Jed's answer, in response to Ajinkya's comment:

To get the same old column names back, you need to replace "old_cols" with the column list of the newly indexed DataFrame. See my modified version of the function below.

def add_column_index(df):
    new_cols = df.schema.names + ['ix']
    ix_df = df.rdd.zipWithIndex().map(lambda (row, ix): row + (ix,)).toDF()
    tmp_cols = ix_df.schema.names
    return reduce(lambda data, idx: data.withColumnRenamed(tmp_cols[idx], new_cols[idx]), xrange(len(tmp_cols)), ix_df)

Answer 6 (score: 0)

For the Python 3 version:

from pyspark.sql.types import StructType, StructField, LongType

def with_column_index(sdf): 
    new_schema = StructType(sdf.schema.fields + [StructField("ColumnIndex", LongType(), False),])
    return sdf.rdd.zipWithIndex().map(lambda row: row[0] + (row[1],)).toDF(schema=new_schema)

df1_ci = with_column_index(df1)
df2_ci = with_column_index(df2)
join_on_index = df1_ci.join(df2_ci, df1_ci.ColumnIndex == df2_ci.ColumnIndex, 'inner').drop("ColumnIndex")

Answer 7 (score: 0)

Not the best option performance-wise:

# Note: crossJoin produces the Cartesian product of the two DataFrames
df3 = df1.crossJoin(df2)
df3.show(3)

Answer 8 (score: 0)

To merge columns from two different DataFrames, you first have to create an index column in each and then join the two DataFrames on it. Indeed, two DataFrames are like two SQL tables: to combine them, you have to join them.

If you don't care about the final ordering of the rows, you can generate the index column with monotonically_increasing_id().

With the following code you can check that monotonically_increasing_id generates the same index column in both DataFrames (for at least a billion rows), so there won't be any mismatch in the merged DataFrame:

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()

sample_size = int(1E9)

sdf1 = spark.range(1, sample_size).select(F.col("id").alias("id1"))
sdf2 = spark.range(1, sample_size).select(F.col("id").alias("id2"))

sdf1 = sdf1.withColumn("idx", F.monotonically_increasing_id())
sdf2 = sdf2.withColumn("idx", F.monotonically_increasing_id())

# If the generated ids match, every id1 equals its paired id2 and the filter returns no rows
sdf3 = sdf1.join(sdf2, 'idx', 'inner')
sdf3 = sdf3.withColumn("diff", F.col("id1") - F.col("id2")).select("diff")
sdf3.filter(F.col("diff") != 0).show()

Answer 9 (score: -2)

You can use a combination of monotonically_increasing_id (guaranteed to always be increasing) and row_number (guaranteed to always give the same ordering). You cannot use row_number alone, because it needs to order by something; so here we order by monotonically_increasing_id. I am using Spark 2.3.1 and Python 2.7.13.

from pandas import DataFrame
from pyspark.sql.functions import (
    monotonically_increasing_id,
    row_number)
from pyspark.sql import Window

DF1 = spark.createDataFrame(DataFrame({
    'C1': [23397414, 5213970, 41323308, 123276113, 76456078],
    'C2': [20875.7353, 20497.5582, 20935.7956, 18884.0477, 18389.9269]}))

DF2 = spark.createDataFrame(DataFrame({
'C3':['2008-02-04', '2008-02-05', '2008-02-06', '2008-02-07', '2008-02-08']}))

DF1_idx = (
    DF1
    .withColumn('id', monotonically_increasing_id())
    .withColumn('columnindex', row_number().over(Window.orderBy('id')))
    .select('columnindex', 'C1', 'C2'))

DF2_idx = (
    DF2
    .withColumn('id', monotonically_increasing_id())
    .withColumn('columnindex', row_number().over(Window.orderBy('id')))
    .select('columnindex', 'C3'))

DF_complete = (
    DF1_idx
    .join(
        other=DF2_idx,
        on=['columnindex'],
        how='inner')
    .select('C1', 'C2', 'C3'))

DF_complete.show()

+---------+----------+----------+
|       C1|        C2|        C3|
+---------+----------+----------+
| 23397414|20875.7353|2008-02-04|
|  5213970|20497.5582|2008-02-05|
| 41323308|20935.7956|2008-02-06|
|123276113|18884.0477|2008-02-07|
| 76456078|18389.9269|2008-02-08|
+---------+----------+----------+