Converting a Spark DataFrame into a Hive partitioned create table in PySpark, using the last two columns as the partitions

Time: 2019-07-01 22:05:45

Tags: python-2.7 apache-spark-sql pyspark-sql

I have a DataFrame in PySpark (2.3) from which I need to generate a partitioned create table statement, to be run via spark.sql() so that the resulting table is Hive-compatible.

Sample DataFrame:
 final.printSchema()
root
 |-- name: string (nullable = true)
 |-- age: string (nullable = true)
 |-- value: long (nullable = true)
 |-- date: string (nullable = true)
 |-- subid: string (nullable = true)
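
For reference, a DataFrame like this can be mocked up as follows; this is a minimal sketch, and the sample row is invented purely for illustration (the types are inferred to match the schema above):

# hypothetical sample row; 'value' is inferred as long, the rest as string
final = spark.createDataFrame(
  [('alice', '30', 100, '2019-07-01', 'a1')],
  ['name', 'age', 'value', 'date', 'subid'])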

The script should read the DataFrame and produce the following table definition, treating the last two columns as the partition columns.

create table schema.final (name string, age string, value bigint)
    partitioned by (date string, subid string) stored as parquet;

Any help with a PySpark solution for the above would be greatly appreciated.

1 Answer:

Answer 0 (score: 1)

Here is one way to do it, by iterating over the schema and generating the Hive SQL:

from pyspark.sql.types import StructType, StructField, StringType, LongType

schema = StructType([
  StructField('name', StringType()),
  StructField('age', StringType()),
  StructField('value', LongType()),
  StructField('date', StringType()),
  StructField('subid', StringType())
])

hiveCols = ""
hivePartitionCols = ""
for idx, c in enumerate(schema):
  # populate hive schema
  if(idx < len(schema[:-2])):
    hiveCols += "{0} {1}".format(c.name, c.dataType.simpleString())

    if(idx < len(schema[:-2]) - 1):
      hiveCols += ","


  # populate hive partition
  if(idx >= len(schema) - 2):
    hivePartitionCols += "{0} {1}".format(c.name, c.dataType.simpleString())

    if(idx < len(schema) - 1):
      hivePartitionCols += ","

hiveCreateSql = "create table schema.final({0}) partitioned by ({1}) stored as parquet".format(hiveCols, hivePartitionCols)
# create table schema.final(name string,age string,value bigint) partitioned by (date string,subid string) stored as parquet

spark.sql(hiveCreateSql)
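
As a follow-up, once the table exists it can be loaded from the DataFrame itself. A minimal sketch, assuming the DataFrame is named final as in the question and that the dynamic-partition settings below suit your environment:

# The schema can also come straight from the DataFrame instead of
# being hand-built: schema = final.schema

# Enable dynamic partitioning so Hive derives the partition values
# (date, subid) from the data being written
spark.sql("set hive.exec.dynamic.partition=true")
spark.sql("set hive.exec.dynamic.partition.mode=nonstrict")

# insertInto matches columns by position; the partition columns are
# already the last two in the DataFrame, so the order lines up
final.write.insertInto("schema.final")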