How do I split a dataframe into multiple dataframes based on column data type with Spark SQL?

Time: 2019-09-24 16:43:20

Tags: python-3.x apache-spark apache-spark-sql pyspark-sql

Below is a sample dataframe. I want to split it into multiple dataframes or RDDs based on the column data types:

ID:Int
Name:String
Joining_Date: Date

My dataframe has more than 100 columns. Is there any built-in method to implement this kind of logic?

1 Answer:

Answer 0 (score: 0)

As far as I know, there is no built-in function for this. However, here is one way to split a dataframe into multiple dataframes based on the column types.

First, let's create some data:

from pyspark.sql.functions import col
from pyspark.sql.types import StructType, StructField, StringType, LongType, DateType

df = spark.createDataFrame([
  (0, 11, "t1", "s1", "2019-10-01"), 
  (0, 22, "t2", "s2", "2019-02-11"), 
  (1, 23, "t3", "s3", "2018-01-10"), 
  (1, 24, "t4", "s4", "2019-10-01")], ["i1", "i2", "s1", "s2", "date"])

df = df.withColumn("date", col("date").cast("date"))

# df.printSchema()
# root
#  |-- i1: long (nullable = true)
#  |-- i2: long (nullable = true)
#  |-- s1: string (nullable = true)
#  |-- s2: string (nullable = true)
#  |-- date: date (nullable = true)

Next, we group the columns of the dataframe into a dictionary whose keys are the column types and whose values are lists of the column names of each type:

d = {}
# group column names into a dict keyed by their DataType
for field in df.schema:
  key = field.dataType
  if key not in d:
    d[key] = [field.name]
  else:
    d[key].append(field.name)

d
# {DateType: ['date'], StringType: ['s1', 's2'], LongType: ['i1', 'i2']}
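
As a side note, the same grouping can be written a bit more compactly with collections.defaultdict and df.dtypes, which returns (column-name, type-string) pairs; in this sketch the dict is keyed by type strings such as 'bigint' rather than DataType instances:

from collections import defaultdict

d2 = defaultdict(list)
# df.dtypes yields pairs like ("i1", "bigint")
for cname, ctype in df.dtypes:
  d2[ctype].append(cname)

dict(d2)
# {'bigint': ['i1', 'i2'], 'string': ['s1', 's2'], 'date': ['date']}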

Then we iterate over the keys (the column types) and, for each entry of the dictionary, generate a schema along with a corresponding empty dataframe:

type_dfs = {}
# create a schema for each type
for k in d.keys():
  schema = StructType(
    [
      StructField(cname, k) for cname in d[k]
    ])

  # finally create an empty df with that schema
  type_dfs[str(k)] = spark.createDataFrame(spark.sparkContext.emptyRDD(), schema)

type_dfs
# {'DateType': DataFrame[date: date],
#  'StringType': DataFrame[s1: string, s2: string],
#  'LongType': DataFrame[i1: bigint, i2: bigint]}

Finally, we can use the generated dataframes by accessing each item of type_dfs:

type_dfs['StringType'].printSchema()

# root
#  |-- s1: string (nullable = true)
#  |-- s2: string (nullable = true)
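
One caveat: the dataframes in type_dfs are empty and only carry the per-type schemas. If the goal is to split the actual rows of df by column type, a plain select per type group does it (a sketch reusing the d dictionary built above; split_dfs is just an illustrative name):

# select each group of columns from the original df
split_dfs = {str(k): df.select(*cols) for k, cols in d.items()}

split_dfs['StringType'].show()
# +---+---+
# | s1| s2|
# +---+---+
# | t1| s1|
# | t2| s2|
# | t3| s3|
# | t4| s4|
# +---+---+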