Scala + How to do placeholder replacement in a Spark DataFrame column from a file?

Date: 2019-04-05 11:56:58

Tags: scala apache-spark dataframe

MyPlaceHolder.json

[[" PHPHONENUMBER ", "(^|\\W)(\\+\\d{1,}\\s*\\(?\\d{1,}\\)?[\\s|\\-|\\d{1,}]{1,})($|\\W)"],    
  [" PHPHONENUMBER ", "(^|\\W)(\\(0[\\d\\s]{1,}\\)[\\s|\\-|\\d{1,}]{1,})($|\\W)"],[" PHPHONENUMBER ", "(^|\\W)(\\+\\d{1,}\\s*\\(?\\d{1,}\\)?[\\s|\\-|\\d{1,}]{1,})($|\\W)"],    
  [" PHPHONENUMBER ", "(^|\\W)(\\(0[\\d\\s]{1,}\\)[\\s|\\-|\\d{1,}]{1,})($|\\W)"],[" PHPHONENUMBER ", "(^|\\W)(\\+\\d{1,}\\s*\\(?\\d{1,}\\)?[\\s|\\-|\\d{1,}]{1,})($|\\W)"],    
  [" PHPHONENUMBER ", "(^|\\W)(\\(0[\\d\\s]{1,}\\)[\\s|\\-|\\d{1,}]{1,})($|\\W)"],[" PHPHONENUMBER ", "(^|\\W)(\\+\\d{1,}\\s*\\(?\\d{1,}\\)?[\\s|\\-|\\d{1,}]{1,})($|\\W)"],    
  [" PHPHONENUMBER ", "(^|\\W)(\\(0[\\d\\s]{1,}\\)[\\s|\\-|\\d{1,}]{1,})($|\\W)"]]

Basically, I need to read this file and replace the matched patterns in a DataFrame column with the corresponding placeholder.

For example: a pattern like "(^|\\W)(\\+\\d{1,}\\s*\\(?\\d{1,}\\)?[\\s|\\-|\\d{1,}]{1,})($|\\W)" should get replaced with " PHPHONENUMBER ".

I did something like the following in Python:

import json
import os
import re

replacement_patterns = get_config_object__(os.getcwd() + REPLACEMENT_PATTERN_FILE_PATH)


def placeholder_replacement(text, replacement_patterns):
    """
     This function replaces placeholders in the text according to replacement_patterns.

     Parameters
     ----------
     text : String
         Input string to the function.

     replacement_patterns : list
         List of [replacement, pattern] pairs loaded from the JSON config.

     Returns
     -------
     text : String
         Output string with the placeholders substituted.
     """

    for replacement, pattern in replacement_patterns:
        text = re.compile(pattern, re.IGNORECASE | re.UNICODE).sub(replacement, text)
    return text

def get_config_object__(config_file_path):
    """
     This function loads the configuration object from a JSON file.

     Parameters
     ----------
     config_file_path : str
         Configuration path.

     Returns
     -------
     config_object : JSON object
         Configuration object.
     """

    config_file = open(config_file_path)
    config_object = json.load(config_file)
    config_file.close()
    return config_object

How can I do this kind of file-driven replacement on a DataFrame column in Scala/Spark?

Note: I can't change the file; placeholder.json is cross-used by other code. (I know it's not valid JSON, but I can't help it.)

It's inside the resources folder.

Here is what I am trying, but it is only experimentation, so please feel free to suggest something different. Nothing has worked so far; I have tried various approaches, but I am new to the language and need some help.

    val inputPath = getClass.getResource("/input_data/placeholder_replacement.txt").getPath

    val inputDF = spark.read.option("delimiter", "|").option("header", true).option("ignoreLeadingWhiteSpace", true).option("ignoreTrailingWhiteSpace", true).csv(inputPath)

    val replacement_pattern = getClass.getResource("/unitmetrics-replacement-patterns.json").getPath

    val replacement_pattern_DF = spark.read.text(replacement_pattern)

    val myval = replacement_pattern_DF.rdd.map(row => row.getString(0).split("],").toList).collect()

    val removeNonGermanLetterFunction = udf((col: String) => {
      myval.foreach { x =>
        x.foreach { x =>
          var key = x.split("\",")(0).replaceAll("[^0-9a-zA-ZäöüßÄÖÜẞ _]", "")
          var value = x.split("\",")(1).replaceAll("\"", "")

          val regex = value.r
          regex.replaceAllIn(col, key)
        }
      }
    })

    val input = inputDF.withColumn("new", removeNonGermanLetterFunction(col("duplicate_word_col")))

    input.show()

1 Answer:

Answer 0 (score: 1)

You should use the Spark DataFrame (aka Spark SQL) API whenever you can, rather than the lower-level RDD API shown in your attempt (rdd.map(), rdd.foreach(), ...).

This generally means loading the data into a DataFrame df and then using df.withColumn() to create new columns with transformations applied to the previous ones. RDDs are still used underneath, but many optimizations are done for you by working through the high-level DataFrame API.

Here is a small Scala application showing how to apply pattern replacements to a DataFrame using the Spark SQL function regexp_replace.

import org.apache.log4j.{Logger, Level}
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Column

object Main {

  def main(args: Array[String]): Unit = {

    // Set logging level to avoid Spark log spam
    Logger.getLogger("org").setLevel(Level.ERROR)
    Logger.getLogger("akka").setLevel(Level.ERROR)

    // Build Spark SQL session (mine is version 2.3.2)
    val spark = SparkSession.builder
      .appName("scalaTest1")
      .master("local[*]")
      .getOrCreate()

    // Import required to use Spark SQL methods like toDF() and calling columns with '
    import spark.implicits._

    // Create some basic DataFrame
    val df1 = List(
      (1, "I got pattern1 and pattern2."),
      (2, "I don't have any."),
      (3, "Oh, that pattern1 I have too.")
    ).toDF("id", "sentence")

    df1.show(false)
    //+---+-----------------------------+
    //|id |sentence                     |
    //+---+-----------------------------+
    //|1  |I got pattern1 and pattern2. |
    //|2  |I don't have any.            |
    //|3  |Oh, that pattern1 I have too.|
    //+---+-----------------------------+

    // Create replacements map
    val replacements = Map(
      "pattern1" -> "replacement1",
      "pattern2" -> "replacement2",
      "I " -> "you "
    )

    // Import required to use functions on DataFrame columns such as regexp_replace()
    import org.apache.spark.sql.functions._

    // Create a new column with one of the replacements applied to "sentence" column
    val df2 = df1.withColumn(
      "new",
      regexp_replace('sentence, "pattern1", replacements("pattern1"))
    )

    df2.show(false)
    //+---+-----------------------------+---------------------------------+
    //|id |sentence                     |new                              |
    //+---+-----------------------------+---------------------------------+
    //|1  |I got pattern1 and pattern2. |I got replacement1 and pattern2. |
    //|2  |I don't have any.            |I don't have any.                |
    //|3  |Oh, that pattern1 I have too.|Oh, that replacement1 I have too.|
    //+---+-----------------------------+---------------------------------+

    // With the first two replacements applied to "sentence" column by nesting one inside the other
    val df3 = df1.withColumn(
      "new",
      regexp_replace(
        regexp_replace('sentence, "pattern2", replacements("pattern2")),
        "pattern1",
        replacements("pattern1")
      )
    )

    df3.show(false)
    //+---+-----------------------------+------------------------------------+
    //|id |sentence                     |new                                 |
    //+---+-----------------------------+------------------------------------+
    //|1  |I got pattern1 and pattern2. |I got replacement1 and replacement2.|
    //|2  |I don't have any.            |I don't have any.                   |
    //|3  |Oh, that pattern1 I have too.|Oh, that replacement1 I have too.   |
    //+---+-----------------------------+------------------------------------+

    // Same, but applying all replacements recursively with "foldLeft" instead of nesting every replacement
    val df4 = df1.withColumn(
      "new",
      replacements.foldLeft(df1("sentence")) {
        case (c: Column, (pattern: String, replacement: String)) => regexp_replace(c, pattern, replacement)
      }
    )
    df4.show(false)
    //+---+-----------------------------+--------------------------------------+
    //|id |sentence                     |new                                   |
    //+---+-----------------------------+--------------------------------------+
    //|1  |I got pattern1 and pattern2. |you got replacement1 and replacement2.|
    //|2  |I don't have any.            |you don't have any.                   |
    //|3  |Oh, that pattern1 I have too.|Oh, that replacement1 you have too.   |
    //+---+-----------------------------+--------------------------------------+

    // Select the columns you want to keep and rename if necessary
    val df5 = df4.select('id, 'new).withColumnRenamed("new", "sentence")
    df5.show(false)
    //+---+--------------------------------------+
    //|id |sentence                              |
    //+---+--------------------------------------+
    //|1  |you got replacement1 and replacement2.|
    //|2  |you don't have any.                   |
    //|3  |Oh, that replacement1 you have too.   |
    //+---+--------------------------------------+

  }

}

There are many libraries in Scala for reading JSON; here I will use the Spark SQL method spark.read.json(path) so as not to add extra dependencies, even though using Spark to read such a small file might be considered overkill.

Note that the function I am using expects a specific file format: one valid JSON object per line, with fields that can be mapped to DataFrame columns.
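Since you say you cannot change your existing MyPlaceHolder.json, which is not in that one-object-per-line format, here is a rough, untested sketch of how you could parse it with only the Scala standard library into a pattern -> placeholder map. The resource path is my assumption, and I assume the file looks exactly like what you posted (in particular, no escaped double quotes inside the entries):

import scala.io.Source

// Read the whole resource file into one string (adjust the path to wherever the file lives)
val raw = Source.fromInputStream(getClass.getResourceAsStream("/MyPlaceHolder.json")).mkString

// Each entry looks like ["<placeholder>", "<regex>"]; capture the two quoted parts
val entry = """\[\s*"([^"]+)"\s*,\s*"([^"]+)"\s*\]""".r

// Build a Map of regex pattern -> placeholder.
// The file uses JSON-style escaping (e.g. \\W), so collapse doubled backslashes into single ones.
val replacementsFromFile: Map[String, String] =
  entry.findAllMatchIn(raw)
    .map(m => m.group(2).replace("\\\\", "\\") -> m.group(1))
    .toMap

That map can then be used with the foldLeft / regexp_replace approach shown above; if you also need the case-insensitive behaviour of your Python version, you can prefix each pattern with (?i). The examples below instead use a small line-delimited file that spark.read.json can parse directly.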

Here is the content of the file replacements.json that I created:

{"pattern":"pattern1" , "replacement": "replacement1"}
{"pattern":"pattern2" , "replacement": "replacement2"}
{"pattern":"I " , "replacement": "you "}

And here is the small application rewritten to read the replacements from that file, put them into a Map, and apply them to the data using the foldLeft approach shown at the end of the previous example.

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.{Column, SparkSession}

object Main2 {

  def main(args: Array[String]): Unit = {

    // Set logging level to avoid Spark log spam
    Logger.getLogger("org").setLevel(Level.ERROR)
    Logger.getLogger("akka").setLevel(Level.ERROR)

    // Build Spark SQL session (mine is version 2.3.2)
    val spark = SparkSession.builder
      .appName("scalaTest1")
      .master("local[*]")
      .getOrCreate()

    // Import required to use Spark SQL methods like toDF() and calling columns with '
    import spark.implicits._
    // Import required to use functions on DataFrame columns such as regexp_replace()
    import org.apache.spark.sql.functions._


    // Create some basic DataFrame
    val df1 = List(
      (1, "I got pattern1 and pattern2."),
      (2, "I don't have any."),
      (3, "Oh, that pattern1 I have too.")
    ).toDF("id", "sentence")
    df1.show(false)
    //+---+-----------------------------+
    //|id |sentence                     |
    //+---+-----------------------------+
    //|1  |I got pattern1 and pattern2. |
    //|2  |I don't have any.            |
    //|3  |Oh, that pattern1 I have too.|
    //+---+-----------------------------+

    // Read replacements json file into a DataFrame
    val replacements_path = "/path/to/your/replacements.json"
    val replacements_df = spark.read.json(replacements_path)
    replacements_df.show(false)
    //+--------+------------+
    //|pattern |replacement |
    //+--------+------------+
    //|pattern1|replacement1|
    //|pattern2|replacement2|
    //|I       |you         |
    //+--------+------------+

    // Turn DataFrame into a Map for ease of use in next step
    val replacements_map = replacements_df
      .collect() // Brings all the df data from all Spark executors to the Spark driver, use only if df is small!
      .map(row => (row.getAs[String]("pattern"), row.getAs[String]("replacement")))
      .toMap
    print(replacements_map)
    // Map(pattern1 -> replacement1, pattern2 -> replacement2, I  -> you )

    // Apply replacements recursively with "foldLeft"
    val df2 = df1.withColumn(
      "new",
      replacements_map.foldLeft(df1("sentence")) {
        case (c: Column, (pattern: String, replacement: String)) => regexp_replace(c, pattern, replacement)
      }
    )
    df2.show(false)
    //+---+-----------------------------+--------------------------------------+
    //|id |sentence                     |new                                   |
    //+---+-----------------------------+--------------------------------------+
    //|1  |I got pattern1 and pattern2. |you got replacement1 and replacement2.|
    //|2  |I don't have any.            |you don't have any.                   |
    //|3  |Oh, that pattern1 I have too.|Oh, that replacement1 you have too.   |
    //+---+-----------------------------+--------------------------------------+

    // Select the columns you want to keep and rename if necessary
    val df3 = df2.select('id, 'new).withColumnRenamed("new", "sentence")
    df3.show(false)
    //+---+--------------------------------------+
    //|id |sentence                              |
    //+---+--------------------------------------+
    //|1  |you got replacement1 and replacement2.|
    //|2  |you don't have any.                   |
    //|3  |Oh, that replacement1 you have too.   |
    //+---+--------------------------------------+

  }

}

In your final application, remove the df.show() and print() calls. Spark "transformations" are "lazy": Spark only stacks the operations you ask for into an execution graph (a DAG) without executing them. Only when you force an action, for example when you call df.show() or write the data somewhere with df.save() (these are called "actions"), does it analyze the DAG, optimize it, and actually run the transformations on the data. You should therefore avoid actions such as df.show() on intermediate transformations.
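As a small illustration of that laziness (this snippet is just my addition, reusing df1 and the imports from the example above): explain() prints the plan Spark has built without running anything, and only the final action triggers the actual computation.

// Nothing is executed here; Spark only records the transformation in its plan
val lazyDf = df1.withColumn("new", regexp_replace('sentence, "pattern1", "replacement1"))

// explain() prints the logical/physical plan; still no data is processed
lazyDf.explain()

// Only an action such as count(), show() or a write actually runs the job
println(lazyDf.count())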