Exploding multiple array columns in Spark SQL

Date: 2020-09-08 05:18:09

Tags: apache-spark apache-spark-sql

How do I explode multiple array columns in Spark? I have a dataframe with 5 stringified array columns and I want to explode all 5 of them. For simplicity, the example below shows only 3 columns.

Given the following input row:

col1                  col2                              col3
["b_val1","b_val2"]   ["at_val1","at_val2","at_val3"]   ["male","female"]

I want to explode all 3 array columns, so the output should look like:

b_val1   at_val1   male
b_val1   at_val1   female
b_val2   at_val1   male
b_val2   at_val1   female
b_val1   at_val2   male
b_val1   at_val2   female
b_val2   at_val2   male
b_val2   at_val2   female
b_val1   at_val3   male
b_val1   at_val3   female
b_val2   at_val3   male
b_val2   at_val3   female
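The expected output is simply the Cartesian product of the three arrays. A minimal pure-Scala sketch (no Spark required) that reproduces the same 12 combinations:

```scala
// Exploding three independent array columns multiplies the row count:
// one output row per combination of elements, 2 * 3 * 2 = 12 rows here.
val col1 = Seq("b_val1", "b_val2")
val col2 = Seq("at_val1", "at_val2", "at_val3")
val col3 = Seq("male", "female")

val rows = for { b <- col1; at <- col2; g <- col3 } yield (b, at, g)

rows.foreach { case (b, at, g) => println(s"$b   $at   $g") }
println(rows.size) // prints 12
```

This is exactly what chained explodes produce in Spark, though the row order in a distributed job is not guaranteed.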

I tried the following:

SELECT
timestamp,
explode(from_json(brandList, 'array<string>')) AS brand,
explode(from_json(articleTypeList, 'array<string>')) AS articleTypeList,
explode(from_json(gender, 'array<string>')) AS gender,
explode(from_json(masterCategoryList, 'array<string>')) AS masterCategoryList,
explode(from_json(subCategoryList, 'array<string>')) AS subCategoryList,
isLandingPage,
...
from table

But this is not allowed, and I get the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Only one generator allowed per select clause but found 5: explode(jsontostructs(brandList)), explode(jsontostructs(articleTypeList)), explode(jsontostructs(gender)), explode(jsontostructs(masterCategoryList)), explode(jsontostructs(subCategoryList));
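As the error says, Spark permits only one generator (such as `explode`) per SELECT clause, but multiple `LATERAL VIEW` clauses can be chained. A sketch of how the original query could be rewritten, using the column names from the question (untested against the actual table; the exploded `gender` column is given a different alias, `gender_value`, to avoid clashing with the source column of the same name):

```sql
SELECT
  timestamp,
  brand,
  articleType,
  gender_value,
  masterCategory,
  subCategory,
  isLandingPage
FROM table
LATERAL VIEW explode(from_json(brandList, 'array<string>')) AS brand
LATERAL VIEW explode(from_json(articleTypeList, 'array<string>')) AS articleType
LATERAL VIEW explode(from_json(gender, 'array<string>')) AS gender_value
LATERAL VIEW explode(from_json(masterCategoryList, 'array<string>')) AS masterCategory
LATERAL VIEW explode(from_json(subCategoryList, 'array<string>')) AS subCategory
```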

1 answer:

Answer 0: (score: 1)

Use withColumn to get the desired output.

Let's create a sample dataframe with 3 columns of ArrayType and perform the explode operation:

import spark.implicits._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._

// Sample data: one row with three array columns
val rdd = spark.sparkContext.makeRDD(List(
  Row(Array(1, 2, 3), Array("a", "b", "c"), Array("1a", "1b", "1c"))))

val schema = new StructType()
  .add("arraycolumn1", ArrayType(IntegerType))
  .add("arraycolumn2", ArrayType(StringType))
  .add("arraycolumn3", ArrayType(StringType))

val df = spark.createDataFrame(rdd, schema)

df.show(5, false)

   +------------+------------+------------+                                        
   |arraycolumn1|arraycolumn2|arraycolumn3|
   +------------+------------+------------+
   |[1, 2, 3]   |[a, b, c]   |[1a, 1b, 1c]|
   +------------+------------+------------+

// Chain one explode per withColumn call; each call is its own SELECT,
// so the one-generator-per-select restriction does not apply
val explodedDF = df
  .withColumn("column1", explode('arraycolumn1))
  .withColumn("column2", explode('arraycolumn2))
  .withColumn("column3", explode('arraycolumn3))

explodedDF.select('column1, 'column2, 'column3).show(5, false)

      +-------+-------+-------+
      |column1|column2|column3| 
      +-------+-------+-------+
      |1      |a      |1a     |
      |1      |a      |1b     |
      |1      |a      |1c     |
      |1      |b      |1a     |
      |1      |b      |1b     |
      +-------+-------+-------+
      only showing top 5 rows

// Let's do the above steps with less lines of code:
// fold an explode over every column, replacing each array column in place

val exploded = df.columns.foldLeft(df)((acc, column) =>
  acc.withColumn(column, explode(col(column))))

exploded.select(df.columns.map(col(_)): _*).show(false)


// Using Spark SQL: chain one LATERAL VIEW per array column

df.createOrReplaceTempView("arrayTable")

spark.sql("""
  SELECT column1, column2, column3 FROM arrayTable
  LATERAL VIEW explode(arraycolumn1) AS column1
  LATERAL VIEW explode(arraycolumn2) AS column2
  LATERAL VIEW explode(arraycolumn3) AS column3""").show