Selecting an array of structs in Spark

Date: 2020-05-22 18:42:01

Tags: java python scala apache-spark

This question is part of a larger problem I am working on, and I am stuck at one particular spot. To keep the problem statement minimal, assume I have a DataFrame created from JSON with the following minimal structure.

The raw data looks something like this:

{"person":[{"name":"david", "email": "david@gmail.com"}, {"name":"steve", "email":"steve@gmail.com"}]}

You can save it as person.json and create a Dataset from it:

Dataset<Row> df = spark.read().json("person.json");

The schema, via printSchema(), is:

root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- email: string (nullable = true)


df.show(false);

+------------------------------------------------------------+
|       person                                               |
+------------------------------------------------------------+
|[[david, david@gmail.com],[steve, steve@gmail.com]]         |
+------------------------------------------------------------+

Now to the actual problem. As part of the code, I have to do:

df.select(array(struct(col("person.name"), reverse(col("person.email")))));

which gives output like:

+------------------------------------------------------------+
|       array(named_struct(person.name as `name`, person.e...|
+------------------------------------------------------------+
|[[[david, steve],[david@gmail.com, steve@gmail.com]]]       |
+------------------------------------------------------------+

and the schema gets updated to:

root
 |-- array(named_struct(name, person.name as `name`, email, person.email as `email`)): array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- name: array (nullable = true)
 |    |    |    |-- element: string (containsNull = true)
 |    |    |-- email: array (nullable = true)
 |    |    |    |-- element: string (containsNull = true)
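
Presumably this happens because selecting a nested field through an array column returns all of its values for the row at once; a minimal Scala sketch of the same effect:

    // person.name over an array<struct> column yields array<string>, which is
    // why each struct field above became an array.
    df.select(col("person.name")).printSchema()
    // root
    //  |-- name: array (nullable = true)
    //  |    |-- element: string (containsNull = true)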

I do not want the schema or the data to change. What should I do instead of the df.select above?

I am using Spark 2.3.0 with Scala 2.11.

As suggested by user Someshwar, I tried using transform on it, but it is not available in lower versions:

df = df.withColumn("person_processed", expr("transform(person, x -> named_struct( 'email', reverse(x.email), 'name', x.name))"));

Below is the stack trace for it:

Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException: 
extraneous input '>' expecting {'(', 'SELECT', 'FROM', 'ADD', 'AS', 'ALL', 'DISTINCT', 'WHERE', 'GROUP', 'BY', 'GROUPING', 'SETS', 'CUBE', 'ROLLUP', 'ORDER', 'HAVING', 'LIMIT', 'AT', 'OR', 'AND', 'IN', NOT, 'NO', 'EXISTS', 'BETWEEN', 'LIKE', RLIKE, 'IS', 'NULL', 'TRUE', 'FALSE', 'NULLS', 'ASC', 'DESC', 'FOR', 'INTERVAL', 'CASE', 'WHEN', 'THEN', 'ELSE', 'END', 'JOIN', 'CROSS', 'OUTER', 'INNER', 'LEFT', 'SEMI', 'RIGHT', 'FULL', 'NATURAL', 'ON', 'LATERAL', 'WINDOW', 'OVER', 'PARTITION', 'RANGE', 'ROWS', 'UNBOUNDED', 'PRECEDING', 'FOLLOWING', 'CURRENT', 'FIRST', 'AFTER', 'LAST', 'ROW', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'DIRECTORY', 'VIEW', 'REPLACE', 'INSERT', 'DELETE', 'INTO', 'DESCRIBE', 'EXPLAIN', 'FORMAT', 'LOGICAL', 'CODEGEN', 'COST', 'CAST', 'SHOW', 'TABLES', 'COLUMNS', 'COLUMN', 'USE', 'PARTITIONS', 'FUNCTIONS', 'DROP', 'UNION', 'EXCEPT', 'MINUS', 'INTERSECT', 'TO', 'TABLESAMPLE', 'STRATIFY', 'ALTER', 'RENAME', 'ARRAY', 'MAP', 'STRUCT', 'COMMENT', 'SET', 'RESET', 'DATA', 'START', 'TRANSACTION', 'COMMIT', 'ROLLBACK', 'MACRO', 'IGNORE', 'BOTH', 'LEADING', 'TRAILING', 'IF', 'POSITION', '+', '-', '*', 'DIV', '~', 'PERCENT', 'BUCKET', 'OUT', 'OF', 'SORT', 'CLUSTER', 'DISTRIBUTE', 'OVERWRITE', 'TRANSFORM', 'REDUCE', 'SERDE', 'SERDEPROPERTIES', 'RECORDREADER', 'RECORDWRITER', 'DELIMITED', 'FIELDS', 'TERMINATED', 'COLLECTION', 'ITEMS', 'KEYS', 'ESCAPED', 'LINES', 'SEPARATED', 'FUNCTION', 'EXTENDED', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'LAZY', 'FORMATTED', 'GLOBAL', TEMPORARY, 'OPTIONS', 'UNSET', 'TBLPROPERTIES', 'DBPROPERTIES', 'BUCKETS', 'SKEWED', 'STORED', 'DIRECTORIES', 'LOCATION', 'EXCHANGE', 'ARCHIVE', 'UNARCHIVE', 'FILEFORMAT', 'TOUCH', 'COMPACT', 'CONCATENATE', 'CHANGE', 'CASCADE', 'RESTRICT', 'CLUSTERED', 'SORTED', 'PURGE', 'INPUTFORMAT', 'OUTPUTFORMAT', DATABASE, DATABASES, 'DFS', 'TRUNCATE', 'ANALYZE', 'COMPUTE', 'LIST', 'STATISTICS', 'PARTITIONED', 'EXTERNAL', 'DEFINED', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'REPAIR', 'RECOVER', 'EXPORT', 'IMPORT', 'LOAD', 'ROLE', 'ROLES', 'COMPACTIONS', 'PRINCIPALS', 'TRANSACTIONS', 'INDEX', 'INDEXES', 'LOCKS', 'OPTION', 'ANTI', 'LOCAL', 'INPATH', STRING, BIGINT_LITERAL, SMALLINT_LITERAL, TINYINT_LITERAL, INTEGER_VALUE, DECIMAL_VALUE, DOUBLE_LITERAL, BIGDECIMAL_LITERAL, IDENTIFIER, BACKQUOTED_IDENTIFIER}(line 1, pos 21)

== SQL ==
transform(person, x -> named_struct( 'email', reverse(x.email), 'name', x.name))
---------------------^^^

    at org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:239)
    at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:115)
    at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
    at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parseExpression(ParseDriver.scala:44)
    at org.apache.spark.sql.functions$.expr(functions.scala:1308)
    at org.apache.spark.sql.functions.expr(functions.scala)
    at com.mywork.jspark.JSparkMain1.main(JSparkMain1.java:43)

2 answers:

Answer 0 (score: 1)

I tried to solve this as follows:

1. Load the data

    val spark = sqlContext.sparkSession
    val implicits = spark.implicits
    import implicits._
    val data =
      """
        |{"person":[{"name":"david", "email": "david@gmail.com"}, {"name":"steve", "email": "steve@gmail.com"}]}
      """.stripMargin
    val df = spark.read
      .json(data.split(System.lineSeparator()).toSeq.toDS())
    df.show(false)
    df.printSchema()

Result:

+----------------------------------------------------+
|person                                              |
+----------------------------------------------------+
|[[david@gmail.com, david], [steve@gmail.com, steve]]|
+----------------------------------------------------+

root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)

2. Process the array<struct>

   This has been tested against Spark 2.4:

    val answer1 = df.withColumn("person_processed",
      expr("transform(person, x -> named_struct('email', reverse(x.email), 'name', x.name))"))
    answer1.show(false)
    answer1.printSchema()

Result:

+----------------------------------------------------+----------------------------------------------------+
|person                                              |person_processed                                    |
+----------------------------------------------------+----------------------------------------------------+
|[[david@gmail.com, david], [steve@gmail.com, steve]]|[[moc.liamg@divad, david], [moc.liamg@evets, steve]]|
+----------------------------------------------------+----------------------------------------------------+

root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)
 |-- person_processed: array (nullable = true)
 |    |-- element: struct (containsNull = false)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)

Note that the input "person" column and the "person_processed" column have the same element type.
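
A quick way to confirm that, as a minimal sketch against the answer1 DataFrame from above (the element types should match even though containsNull differs on the arrays):

    import org.apache.spark.sql.types.ArrayType

    // Compare the struct type of the elements of both array columns.
    val inType  = answer1.schema("person").dataType.asInstanceOf[ArrayType].elementType
    val outType = answer1.schema("person_processed").dataType.asInstanceOf[ArrayType].elementType
    assert(inType == outType) // both are struct<email:string,name:string>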

Edit-1 (as per comments, with a case class)

The user is on Spark 2.3, where the higher-order functions for maps and arrays are not available. The solution below works on Spark 2.3:

    // Works on Spark < 2.4 (no higher-order functions needed).
    // Assumes in scope: import scala.collection.mutable
    //                   import org.apache.spark.sql.functions.{col, udf}
    //                   import org.apache.spark.sql.types._
    case class InfoData(name: String, email: String)
    val infoDataSchema =
      ArrayType(StructType(Array(StructField("name", StringType), StructField("email", StringType))))

    // Zip the name and email arrays positionally and reverse each email.
    val reverseEmailUDF = udf((arr1: mutable.WrappedArray[String], arr2: mutable.WrappedArray[String]) => {
      if (arr1.length != arr2.length) null
      else arr1.zipWithIndex.map(t => InfoData(t._1, arr2(t._2).reverse))
    }, infoDataSchema)

    val spark2_3Processed = df
      .withColumn("person_processed",
          reverseEmailUDF(
            col("person.name").cast("array<string>"),
            col("person.email").cast("array<string>")
          )
      )

    spark2_3Processed.show(false)
    spark2_3Processed.printSchema()

Output:

+----------------------------------------------------+----------------------------------------------------+
|person                                              |person_processed                                    |
+----------------------------------------------------+----------------------------------------------------+
|[[david@gmail.com, david], [steve@gmail.com, steve]]|[[david, moc.liamg@divad], [steve, moc.liamg@evets]]|
+----------------------------------------------------+----------------------------------------------------+

root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)
 |-- person_processed: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- email: string (nullable = true)

Edit-2 (as per comments, without a case class)

The user is on Spark 2.3, where the higher-order functions for maps and arrays are not available, and creating a case class is difficult there. The solution below works on Spark 2.3:

    // Reuse the element schema of the existing column instead of declaring one.
    val subSchema = df.schema("person").dataType

    // Tuples are written into subSchema's fields positionally.
    val reverseEmailUDF_withoutCaseClass =
      udf((nameArray: mutable.WrappedArray[String], emailArray: mutable.WrappedArray[String]) => {
        if (nameArray.length != emailArray.length) null
        else nameArray.zipWithIndex.map(t => (t._1, emailArray(t._2).reverse))
      }, subSchema)

    val withoutCaseClasDF = df
      .withColumn("person_processed",
          reverseEmailUDF_withoutCaseClass(
            col("person.name").cast("array<string>"),
            col("person.email").cast("array<string>")
          )
      )

    withoutCaseClasDF.show(false)
    withoutCaseClasDF.printSchema()
    withoutCaseClasDF.select("person_processed.email").show(false)

Output:

+----------------------------------------------------+----------------------------------------------------+
|person                                              |person_processed                                    |
+----------------------------------------------------+----------------------------------------------------+
|[[david@gmail.com, david], [steve@gmail.com, steve]]|[[david, moc.liamg@divad], [steve, moc.liamg@evets]]|
+----------------------------------------------------+----------------------------------------------------+

root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)
 |-- person_processed: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)

+--------------+
|email         |
+--------------+
|[david, steve]|
+--------------+
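
One caveat in this variant: the tuple is written into subSchema positionally, and subSchema orders its fields as (email, name), so the name values land under the email label here (which is why the last select prints [david, steve]). A minimal sketch of the same UDF with the tuple in schema order, under the same assumptions as above:

    // Return (reversedEmail, name) to match subSchema's (email, name) field order.
    val reverseEmailUDF_ordered =
      udf((nameArray: mutable.WrappedArray[String], emailArray: mutable.WrappedArray[String]) => {
        if (nameArray.length != emailArray.length) null
        else nameArray.zipWithIndex.map(t => (emailArray(t._2).reverse, t._1))
      }, subSchema)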


Answer 1 (score: 0)

Try the code below.

scala> df.show(false)
+----------------------------------------------------+
|person                                              |
+----------------------------------------------------+
|[[david@gmail.com, david], [steve@gmail.com, steve]]|
+----------------------------------------------------+

scala> df.printSchema
root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)


scala> val finalDF = df
.select(explode($"person").as("person"))
.groupBy(lit(1).as("id"))
.agg(
    collect_list(
        struct(
            reverse($"person.email").as("email"),
            $"person.name").as("person")
        ).as("person")
    )
.drop("id")

finalDF: org.apache.spark.sql.DataFrame = [person: array<struct<email:string,name:string>>]

scala> finalDF.show(false)
+----------------------------------------------------+
|person                                              |
+----------------------------------------------------+
|[[moc.liamg@divad, david], [moc.liamg@evets, steve]]|
+----------------------------------------------------+

scala> finalDF.printSchema
root
 |-- person: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- email: string (nullable = true)
 |    |    |-- name: string (nullable = true)

scala>
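
A note on the design: grouping on lit(1) only works because the sample has a single row; with more input rows it would merge every person into one array. A minimal sketch, assuming a per-row id is acceptable, that keeps rows separate:

    // Tag each row before exploding so collect_list rebuilds one array per row.
    // monotonically_increasing_id is only assumed unique within this job.
    val perRowDF = df
      .withColumn("id", monotonically_increasing_id())
      .select($"id", explode($"person").as("person"))
      .groupBy($"id")
      .agg(collect_list(struct(reverse($"person.email").as("email"), $"person.name")).as("person"))
      .drop("id")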
