How can I improve the performance of my Spark SQL joins?

Date: 2019-01-03 19:17:25

Tags: apache-spark apache-spark-sql

I have two data sources (both CSV files): an incoming source with 2.2 million records and a master source with 35 million records. My job is to check how many records in the incoming source match records in the master source and to output those matches. The catch is that the records are noisy and need fuzzy string matching rather than exact matching. My join works fine on small data, but when I have to do the same thing on the full data it takes forever.

FYI: with the code below, joining the incoming data (1 million records) against the master data (3 million records) took me about 1 hour 40 minutes on an 8-core machine.

For example, one of the 35 million records in the master data source looks like this:

"Markets, Inc.", 1 Bank Plz,, IL, Chicago, 60670-0001, IL

One of the records in the incoming data looks like this:

"Markets Inc", 1 Bank, Chicago, IL, 60670-0001, IL

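To give a sense of the fuzziness involved, the two sample records above can be checked in isolation with the same JaroWinkler class used in the job (a sketch only; it assumes the java-string-similarity library, and 0.8 is the threshold used in the join filter below):

import info.debatty.java.stringsimilarity.JaroWinkler

val jw = new JaroWinkler()
// Both scores should land well above the 0.8 threshold for this pair of records
val nameScore = jw.similarity("Markets, Inc.", "Markets Inc")
val addrScore = jw.similarity("1 Bank Plz", "1 Bank")
val isMatch = nameScore > 0.8 && addrScore > 0.8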
Below is my code:

import scala.util.Try
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, concat, udf}
import info.debatty.java.stringsimilarity.JaroWinkler // string-similarity library assumed for JaroWinkler

// Truncate ZIP codes to their first five characters so ZIP+4 values still compare equal
def myFunc: (String => String) = {
  s =>
    if (s.length > 5) {
      s.substring(0, 5)
    } else s
}
val myUDF = udf(myFunc)
// Master data: rename columns, concatenate the two street-address fields,
// truncate the ZIP, and add "Beginswith1" (first letter of the company name)
var sourcedata = spark.sqlContext.read.option("header", "true").option("delimiter", "|")
  .csv("./src/main/resources/company_address_sample3000000.txt").na.fill("")
  .select(col("COMPANY_NAME").alias("NAME1"), concat(col("STREET_ADDR_1"),
    col("STREET_ADDR_2")).alias("ADDRESS1"), col("CITY").alias("CITY1"), col("STATE").alias("STATE1"),
    myUDF(col("ZIP")).alias("ZIP1"))
  .withColumn("Beginswith1", col("NAME1").substr(0, 1)).distinct()
  .repartition(col("Beginswith1"), col("NAME1"), col("ADDRESS1"), col("CITY1"), col("STATE1"), col("ZIP1"))

// Incoming data: same columns, with a matching "Beginswith" first-letter column
var incomingData = spark.sqlContext.read.option("header", "true").option("delimiter", "|")
  .csv("./src/main/resources/common_format_sample1000000.txt")
  .select("NAME", "ADDRESS", "CITY", "STATE", "ZIP")
  .withColumn("Beginswith", col("NAME").substr(0, 1)).distinct()
  .repartition(col("Beginswith"), col("NAME"), col("ADDRESS"), col("CITY"), col("STATE"), col("ZIP"))

// Jaro-Winkler similarity in [0, 1]; falls back to 0.0 if the comparison throws (e.g. null inputs)
def calculate_similarity(str: String, str1: String): Double = {
  val dist = new JaroWinkler()
  Try {
    dist.similarity(str, str1)
  } getOrElse (0.0)
}

// Join filter: exact state/city equality plus fuzzy (> 0.8) name and address similarity
def myFilterFunction(
                      nameInp: String, nameRef: String,
                      addInp: String, addRef: String,
                      cityInp: String, cityRef: String,
                      stateInp: String, stateRef: String,
                      zipInp: String, zipRef: String
                    ) = {
  stateInp == stateRef && cityInp == cityRef && calculate_similarity(nameInp, nameRef) > 0.8 && calculate_similarity(addInp, addRef) > 0.8
}

val udf1 = org.apache.spark.sql.functions.udf(myFilterFunction _)
val filter: Column = udf1(
  incomingData("NAME"), sourcedata("NAME1"),
  incomingData("ADDRESS"), sourcedata("ADDRESS1"),
  incomingData("CITY"), sourcedata("CITY1"),
  incomingData("STATE"), sourcedata("STATE1"),
  incomingData("ZIP"), sourcedata("ZIP1")
)

// Left-semi join: keep incoming rows whose name shares a first letter with a master row and passes the fuzzy filter
incomingData.join(sourcedata, incomingData("Beginswith") === sourcedata("Beginswith1") && filter, "left_semi")
  .write.csv("./src/main/resources/hihello3-0.8-1m3m.csv")
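For what it's worth, with only the single-character Beginswith column available as an equality key, each key matches a large slice of both datasets, so the similarity UDF ends up being evaluated for a huge number of candidate pairs. The join strategy Spark actually chooses can be checked by printing the physical plan (a diagnostic sketch, not part of the job itself):

// Diagnostic only: show the physical plan for the Beginswith-plus-UDF join condition
incomingData
  .join(sourcedata, incomingData("Beginswith") === sourcedata("Beginswith1") && filter, "left_semi")
  .explain()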

1 Answer:

Answer 0 (score: 0)

Rearranging the order of the join filters reduced the time dramatically, from 1 hour 50 minutes to 90 seconds: the exact STATE and CITY comparisons are moved out of the UDF and into the join condition itself, so (presumably) Spark can use them as additional join keys and only run the expensive similarity UDF on the much smaller set of candidate pairs. While this is not a solution from a SQL-optimization perspective, it serves my current purpose given my data. I would still be happy to hear if anyone has a solution from the SQL-optimization angle.

var sourcedata = spark.sqlContext.read.option("header", "true").option("delimiter", "|")
  .csv("./src/main/resources/company_address.txt").na.fill("")
  .select(col("COMPANY_NAME").alias("NAME1"), concat(col("STREET_ADDR_1"),
    col("STREET_ADDR_2")).alias("ADDRESS1"), col("CITY").alias("CITY1"), col("STATE").alias("STATE1"),
    col("ZIP").alias("ZIP1"))
  .withColumn("Beginswith1", col("NAME1").substr(0, 1))
  .repartition(col("Beginswith1"), col("NAME1"), col("ADDRESS1"), col("CITY1"), col("STATE1"), col("ZIP1"))

var incomingData_Select = spark.sqlContext.read.option("header", "true").option("delimiter", "|")
  .csv("./src/main/resources/common_format.txt")
  .select("NAME", "ADDRESS", "CITY", "STATE", "ZIP")
  .withColumn("Beginswith", col("NAME").substr(0, 1)).distinct()
  .repartition(col("Beginswith"), col("NAME"), col("ADDRESS"), col("CITY"), col("STATE"), col("ZIP"))

// Returns true only if both name and address clear the 0.8 Jaro-Winkler threshold;
// any exception (e.g. null inputs) counts as no match
def calculate_similarity(str: String, str1: String, str2: String, str3: String): Boolean = {
  val dist = new JaroWinkler()
  Try {
    dist.similarity(str, str1) > 0.8 && dist.similarity(str2, str3) > 0.8
  } getOrElse (false)
}

def myFilterFunction(
                      nameInp: String, nameRef: String,
                      addInp: String, addRef: String
                    ) = {
  calculate_similarity(nameInp, nameRef, addInp, addRef)
}

val sim_udf = org.apache.spark.sql.functions.udf(myFilterFunction _)

val filter: Column = sim_udf(
  incomingData_Select("NAME"), sourcedata("NAME1"),
  incomingData_Select("ADDRESS"), sourcedata("ADDRESS1")
)

val matching_companies = incomingData_Select
  .join(sourcedata,
    incomingData_Select("STATE") === sourcedata("STATE1")
      && incomingData_Select("CITY") === sourcedata("CITY1")
      && incomingData_Select("Beginswith") === sourcedata("Beginswith1")
      && filter,
    "left_semi")
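For completeness, the result can be materialized the same way as in the question; a sketch (the output path here is just an example):

// Write the matching incoming records out, as in the original job
matching_companies.write.csv("./src/main/resources/matching_companies_output.csv")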