I have a dataframe with dates and I want to filter for the most recent 3 days (not relative to the current time, but relative to the latest date present in the dataset).
opt should return:
+---+----------------------------------------------------------------------------------+----------+
|id |partition |date |
+---+----------------------------------------------------------------------------------+----------+
|1 |/raw/gsec/qradar/flows/dt=2019-12-01/hour=00/1585218406613_flows_20191201_00.jsonl|2019-12-01|
|2 |/raw/gsec/qradar/flows/dt=2019-11-30/hour=00/1585218406613_flows_20191201_00.jsonl|2019-11-30|
|3 |/raw/gsec/qradar/flows/dt=2019-11-29/hour=00/1585218406613_flows_20191201_00.jsonl|2019-11-29|
|4 |/raw/gsec/qradar/flows/dt=2019-11-28/hour=00/1585218406613_flows_20191201_00.jsonl|2019-11-28|
|5 |/raw/gsec/qradar/flows/dt=2019-11-27/hour=00/1585218406613_flows_20191201_00.jsonl|2019-11-27|
+---+----------------------------------------------------------------------------------+----------+
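Conceptually, this is the kind of filter I am after. A rough sketch of the idea (it assumes the date column is already a yyyy-MM-dd string or a proper DateType, and the variable names are just placeholders):
from pyspark.sql.functions import *

# latest date that actually occurs in the data (not the current date)
max_date = df.select(max(to_date('date')).alias('d')).first()['d']

# keep rows whose date is within 2 days of that maximum, i.e. the last 3 days
recent = df.filter(to_date('date') >= date_sub(to_date(lit(str(max_date))), 2))
recent.show(truncate=False)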
Edit: I have used @Lamanus' answer to extract the date from the partition string:
+---+----------------------------------------------------------------------------------+----------+
|id |partition |date |
+---+----------------------------------------------------------------------------------+----------+
|1 |/raw/gsec/qradar/flows/dt=2019-12-01/hour=00/1585218406613_flows_20191201_00.jsonl|2019-12-01|
|2 |/raw/gsec/qradar/flows/dt=2019-11-30/hour=00/1585218406613_flows_20191201_00.jsonl|2019-11-30|
|3 |/raw/gsec/qradar/flows/dt=2019-11-29/hour=00/1585218406613_flows_20191201_00.jsonl|2019-11-29|
+---+----------------------------------------------------------------------------------+----------+
Answer (score: 1)
For your original purpose, I don't think you need the date-specific folders at all. Since the folder structure is already partitioned by dt, read everything under the base path and filter.
from pyspark.sql.functions import *

df = spark.createDataFrame([('1', '/raw/gsec/qradar/flows/dt=2019-12-01/hour=00/1585218406613_flows_20191201_00.jsonl')]).toDF('id', 'value')

# Extract the date from the path and expand it to the latest 3 dates.
# sequence() steps by -1 automatically when the start date is after the stop date.
dates = df.withColumn('date', regexp_extract('value', '[0-9]{4}-[0-9]{2}-[0-9]{2}', 0)) \
    .withColumn('date', explode(sequence(to_date('date'), date_sub('date', 2)))) \
    .select('date').rdd.map(lambda x: str(x[0])).collect()

# Base folder above the dt= partition directories.
path = df.withColumn('value', split('value', '/dt')[0]) \
    .select('value').rdd.map(lambda x: str(x[0])).collect()

# Read everything under the base path and keep only the latest 3 partition dates.
newDF = spark.read.json(path).filter(col('dt').isin(dates))
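With the sample row above, dates works out to ['2019-12-01', '2019-11-30', '2019-11-29'] and path to ['/raw/gsec/qradar/flows']. The last line relies on Spark's partition discovery: because the folders are named dt=..., reading the base path exposes a dt column that can be filtered with isin. Note that sequence() requires Spark 2.4+.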
Here is my attempt:
from pyspark.sql.functions import *

df = spark.createDataFrame([('1', '/raw/gsec/qradar/flows/dt=2019-12-01/hour=00/1585218406613_flows_20191201_00.jsonl')]).toDF('id', 'value')

# Expand the extracted date to the latest 3 dates and build a path regex for each.
df.withColumn('date', regexp_extract('value', '[0-9]{4}-[0-9]{2}-[0-9]{2}', 0)) \
    .withColumn('date', explode(sequence(to_date('date'), date_sub('date', 2)))) \
    .withColumn('value', concat(lit('.*/'), col('date'), lit('/.*'))).show(10, False)
+---+----------------+----------+
|id |value |date |
+---+----------------+----------+
|1 |.*/2019-12-01/.*|2019-12-01|
|1 |.*/2019-11-30/.*|2019-11-30|
|1 |.*/2019-11-29/.*|2019-11-29|
+---+----------------+----------+
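To actually apply these per-date patterns, one option is to collect them and filter the path listing with rlike. Below is a minimal, untested sketch along those lines; it anchors the pattern on the dt= folder name (rather than the bare /<date>/ shown above) so that it matches paths like .../dt=2019-12-01/..., and the patterns_df, wanted, combined and recent names are made up for illustration:
patterns_df = df.withColumn('date', regexp_extract('value', '[0-9]{4}-[0-9]{2}-[0-9]{2}', 0)) \
    .withColumn('date', explode(sequence(to_date('date'), date_sub('date', 2)))) \
    .withColumn('pattern', concat(lit('dt='), col('date'), lit('/')))

# Collect the three per-date patterns and combine them into one regex.
wanted = [r['pattern'] for r in patterns_df.select('pattern').collect()]
combined = '|'.join(wanted)   # e.g. dt=2019-12-01/|dt=2019-11-30/|dt=2019-11-29/

# Keep only the paths whose partition folder is one of the latest 3 dates.
recent = df.filter(col('value').rlike(combined))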