Spark mergeSchema on Parquet columns

Date: 2020-04-17 06:26:36

Tags: scala azure apache-spark databricks

For schema evolution, the mergeSchema option can be used in Spark with the Parquet file format. I have the following questions about it.

Does this support only the Parquet file format, or other file formats such as CSV and txt files as well?

If I add new columns in between existing ones, I understand mergeSchema moves those columns to the end.

If the column order is disturbed, does mergeSchema align the columns into the correct order when they are created, or do we need to do this manually by selecting all the columns?

Update from comments: For example, if I have a schema as below and create a table with spark.sql("CREATE TABLE emp USING DELTA LOCATION '****'"):

empid,empname,salary ====> 001,ABC,10000

The next day, I receive data in the following format:

empid,empage,empdept,empname,salary ====> 001,30,XYZ,ABC,10000

Will the new columns empage, empdept be added after the empid, empname, salary columns?

1 Answer:

Answer 0 (score: 3)

Q: 1. Does this support only the Parquet file format, or other file formats such as CSV and txt files as well? 2. If the column order is disturbed, does mergeSchema align the columns into the correct order when they are created, or do we need to do this manually by selecting all the columns?


AFAIK, schema merging is supported only by Parquet; other formats such as CSV and txt do not support it.

mergeSchema (spark.sql.parquet.mergeSchema) aligns the columns into the correct order, even when they appear in a different order across files.
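The ordering rule can be sketched in plain Scala (a minimal sketch, no Spark required; `mergeColumns` is a hypothetical helper, not a Spark API): each column keeps the position of its first appearance, and columns first seen in later files are appended at the end.

```scala
// Hypothetical sketch of the mergeSchema column-ordering rule:
// a column keeps the position of its first appearance, and columns
// first seen in later files are appended at the end.
def mergeColumns(schemas: Seq[Seq[String]]): Seq[String] =
  schemas.flatten.distinct

val day1 = Seq("empid", "empname", "salary")
val day2 = Seq("empid", "empage", "empdept", "empname", "salary")

val merged = mergeColumns(Seq(day1, day2))
// merged: Seq("empid", "empname", "salary", "empage", "empdept")
```

This matches the merged schema printed further down in the answer: the day-1 columns keep their positions and empage, empdept are appended.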

Example from the Spark documentation on Parquet schema merging:

import spark.implicits._

// Create a simple DataFrame, store into a partition directory
val squaresDF = spark.sparkContext.makeRDD(1 to 5).map(i => (i, i * i)).toDF("value", "square")
squaresDF.write.parquet("data/test_table/key=1")

// Create another DataFrame in a new partition directory,
// adding a new column and dropping an existing column
val cubesDF = spark.sparkContext.makeRDD(6 to 10).map(i => (i, i * i * i)).toDF("value", "cube")
cubesDF.write.parquet("data/test_table/key=2")

// Read the partitioned table
val mergedDF = spark.read.option("mergeSchema", "true").parquet("data/test_table")
mergedDF.printSchema()

// The final schema consists of all 3 columns in the Parquet files together
// with the partitioning column appeared in the partition directory paths
// root
//  |-- value: int (nullable = true)
//  |-- square: int (nullable = true)
//  |-- cube: int (nullable = true)
//  |-- key: int (nullable = true)
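As an alternative to passing the option on every read, the same behaviour can be enabled session-wide through the `spark.sql.parquet.mergeSchema` configuration (a sketch, assuming the SparkSession `spark` and the `data/test_table` path from the example above):

```scala
// Enable Parquet schema merging for the whole session
// instead of passing the option on every read.
spark.conf.set("spark.sql.parquet.mergeSchema", "true")

// A plain read now merges the schemas of all partitions.
val mergedAll = spark.read.parquet("data/test_table")
mergedAll.printSchema()
```

Note that when the per-read `mergeSchema` option is given, it takes precedence over the session configuration.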

Update: regarding the real example you gave in the comment box...


Q: Will the new columns empage, empdept be added after the empid, empname, salary columns?


Answer: Yes. EMPAGE and EMPDEPT are added after EMPID, EMPNAME, SALARY, followed by your day column.

See the full example:

package examples

import org.apache.log4j.Level
import org.apache.spark.sql.SaveMode


object CSVDataSourceParquetSchemaMerge extends App {
  val logger = org.apache.log4j.Logger.getLogger("org")
  logger.setLevel(Level.WARN)

  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("CSVParquetSchemaMerge")
    .master("local")
    .getOrCreate()


  import spark.implicits._

  // Day-1 data: empid, empname, salary
  val csvDataday1 = spark.sparkContext.parallelize(
    """
      |empid,empname,salary
      |001,ABC,10000
    """.stripMargin.lines.toList).toDS()

  // Day-2 data: two new columns (empage, empdept) inserted in the middle
  val csvDataday2 = spark.sparkContext.parallelize(
    """
      |empid,empage,empdept,empname,salary
      |001,30,XYZ,ABC,10000
    """.stripMargin.lines.toList).toDS()

  val frame = spark.read.option("header", true).option("inferSchema", true).csv(csvDataday1)

  println("first day data ")
  frame.show
  frame.write.mode(SaveMode.Overwrite).parquet("data/test_table/day=1")
  frame.printSchema

  val frame1 = spark.read.option("header", true).option("inferSchema", true).csv(csvDataday2)
  frame1.write.mode(SaveMode.Overwrite).parquet("data/test_table/day=2")
  println("Second day data ")

  frame1.show(false)
  frame1.printSchema

  // Read the partitioned table
  val mergedDF = spark.read.option("mergeSchema", "true").parquet("data/test_table")
  println("Merged Schema")
  mergedDF.printSchema
  println("Merged DataFrame where EMPAGE, EMPDEPT were added after EMPID, EMPNAME, SALARY, followed by your day column")
  mergedDF.show(false)


}


Result:

first day data 
+-----+-------+------+
|empid|empname|salary|
+-----+-------+------+
|    1|    ABC| 10000|
+-----+-------+------+

root
 |-- empid: integer (nullable = true)
 |-- empname: string (nullable = true)
 |-- salary: integer (nullable = true)

Second day data 
+-----+------+-------+-------+------+
|empid|empage|empdept|empname|salary|
+-----+------+-------+-------+------+
|1    |30    |XYZ    |ABC    |10000 |
+-----+------+-------+-------+------+

root
 |-- empid: integer (nullable = true)
 |-- empage: integer (nullable = true)
 |-- empdept: string (nullable = true)
 |-- empname: string (nullable = true)
 |-- salary: integer (nullable = true)

Merged Schema
root
 |-- empid: integer (nullable = true)
 |-- empname: string (nullable = true)
 |-- salary: integer (nullable = true)
 |-- empage: integer (nullable = true)
 |-- empdept: string (nullable = true)
 |-- day: integer (nullable = true)

Merged DataFrame where EMPAGE, EMPDEPT were added after EMPID, EMPNAME, SALARY, followed by your day column
+-----+-------+------+------+-------+---+
|empid|empname|salary|empage|empdept|day|
+-----+-------+------+------+-------+---+
|1    |ABC    |10000 |30    |XYZ    |2  |
|1    |ABC    |10000 |null  |null   |1  |
+-----+-------+------+------+-------+---+

Directory tree:

(screenshot of the output directory: data/test_table containing the partition folders day=1 and day=2, each holding Parquet part files)