Convert dynamic date columns in a PySpark dataframe to another format

Asked: 2019-06-17 10:43:04

Tags: python pyspark

  1. I have a dataframe:

df = spark.createDataFrame([(1, 2, 3, {'dt_created': '2018-06-29T11:43:57.530Z', 'rand_col1': 'val1'}),
                            (4, 5, 6, {'rand_col2': 'val2', 'rand_col3': 'val3'}),
                            (7, 8, 9, {'dt_uploaded': '2018-06-19T11:43:57.530Z', 'rand_col1': 'val2'})])

  2. The JSON column may or may not contain a date entry, and the date key is dynamic.
  3. I want to check whether any value in the JSON matches a date format and, if one does, convert that value to another format.

1 Answer:

Answer 0 (score: 1)

This can be solved easily with a UDF.

Approach 1

This code looks for a date among the JSON values and, when one matches, converts it to the new format (in my example the result goes into a new column):

import re
from datetime import datetime

import pyspark.sql.functions as f
from pyspark.shell import spark

@f.udf()  # no returnType given, so the UDF produces a StringType column
def parse(column: dict):
    # Scan the map values for anything that looks like an ISO 8601 timestamp.
    for value in column.values():
        if re.match(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z', value):
            # Reformat the first match as yyyy-MM-dd.
            return datetime \
                .strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') \
                .strftime('%Y-%m-%d')

    return None

df = spark.createDataFrame([(1, 2, 3, {'dt_created': '2018-06-29T11:43:57.530Z', 'rand_col1': 'val1'}),
                            (4, 5, 6, {'rand_col2': 'val2', 'rand_col3': 'val3'}),
                            (7, 8, 9, {'dt_uploaded': '2018-06-19T11:43:57.530Z', 'rand_col1': 'val2'})],
                           ['A', 'B', 'C', 'D'])

df = df.withColumn('parse_dt', parse(f.col('D')))
df.show()

Output:

+---+---+---+--------------------+----------+
|  A|  B|  C|                   D|  parse_dt|
+---+---+---+--------------------+----------+
|  1|  2|  3|[dt_created -> 20...|2018-06-29|
|  4|  5|  6|[rand_col2 -> val...|      null|
|  7|  8|  9|[dt_uploaded -> 2...|2018-06-19|
+---+---+---+--------------------+----------+
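
A small follow-up that is not in the original answer: the UDF above returns plain strings, so parse_dt is a string column. If a proper DateType is needed, it can be cast afterwards; a minimal sketch using the built-in to_date:

# Sketch (my addition): cast the UDF's string output to a real DateType column.
# to_date parses the default yyyy-MM-dd pattern without an explicit format.
df = df.withColumn('parse_dt', f.to_date('parse_dt'))
df.printSchema()  # parse_dt now shows as: date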

Approach 2

If you only want to replace the dates inside the JSON itself:

import re
from datetime import datetime

import pyspark.sql.functions as f
from pyspark.shell import spark
from pyspark.sql.types import MapType, StringType


@f.udf(returnType=MapType(StringType(), StringType()))
def parse(column: dict):
    # Rewrite any ISO 8601 timestamp values in place; other entries stay untouched.
    for key, value in column.items():
        if re.match(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z', value):
            column[key] = datetime \
                .strptime(value, '%Y-%m-%dT%H:%M:%S.%fZ') \
                .strftime('%Y-%m-%d')

    return column


df = spark.createDataFrame([(1, 2, 3, {'dt_created': '2018-06-29T11:43:57.530Z', 'rand_col1': 'val1'}),
                            (4, 5, 6, {'rand_col2': 'val2', 'rand_col3': 'val3'}),
                            (7, 8, 9, {'dt_uploaded': '2018-06-19T11:43:57.530Z', 'rand_col1': 'val2'})],
                           ['A', 'B', 'C', 'D'])

df = df.withColumn('D', parse(f.col('D')))
df.show(truncate=False)

Output:

+---+---+---+----------------------------------------------+
|A  |B  |C  |D                                             |
+---+---+---+----------------------------------------------+
|1  |2  |3  |[dt_created -> 2018-06-29, rand_col1 -> val1] |
|4  |5  |6  |[rand_col2 -> val2, rand_col3 -> val3]        |
|7  |8  |9  |[dt_uploaded -> 2018-06-19, rand_col1 -> val2]|
+---+---+---+----------------------------------------------+
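
For completeness, a variant that is not part of the original answer: on Spark 3.1+ the Python UDF can be avoided entirely with the built-in higher-order function transform_values, which rewrites map values natively in the JVM. A minimal sketch against the same 'D' column:

import pyspark.sql.functions as f

# Sketch (assumes Spark 3.1+): rewrite date-like map values without a Python UDF.
df = df.withColumn('D', f.transform_values(
    'D',
    lambda k, v: f.when(
        v.rlike(r'^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z$'),
        # Parse the ISO 8601 string and render it as yyyy-MM-dd.
        f.date_format(f.to_timestamp(v, "yyyy-MM-dd'T'HH:mm:ss.SSSXXX"), 'yyyy-MM-dd')
    ).otherwise(v)
))

Keeping the transformation in native Spark functions avoids Python serialization overhead, which usually matters once the data no longer fits on one machine.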