AWS Glue: write only the latest partition as Parquet

Asked: 2019-10-21 18:16:04

Tags: amazon-web-services pyspark aws-glue

I have a Glue database with two tables that hold the same data, just partitioned differently. I am trying to write a job that runs every night, reads from one table, and writes the new data with the updated partitioning. I can do that with the following code:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.sql.functions import lit

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)

# Read the source table from the Glue Data Catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "Database",
    table_name = "Table",
    transformation_ctx = "datasource0"
)

# Convert the DynamicFrame to a Spark DataFrame
datasource0 = datasource0.toDF()

# Write the whole DataFrame as Parquet, partitioned by both keys
datasource0.write.partitionBy("Key1", "Key2").parquet(OutputFilePath)

But this takes the entire DataFrame and writes all of it. I only want to write the new partitions, so I found the following snippet on the AWS site:

glue_context.write_dynamic_frame.from_options(
    frame = projectedEvents,
    connection_type = "s3",
    connection_options = {"path": "$outpath", "partitionKeys": ["type"]},
    format = "parquet")

However, this also just rewrites the entire DataFrame. How can I write only the latest partition?

2 Answers:

Answer 0 (score: 0)

Maybe take a look at job bookmarks. They work like a checkpointing mechanism and avoid reprocessing data that has already been processed: https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html
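
A minimal sketch of a bookmark-enabled job, reusing the database and table names from the question (treat them as placeholders; the job itself must also be started with bookmarks enabled, e.g. --job-bookmark-option job-bookmark-enable):

import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
job = Job(glueContext)
job.init(args['JOB_NAME'], args)  # loads the bookmark state for this job

# transformation_ctx is the key the bookmark tracks; on later runs
# only data not yet processed under this context is returned
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "Database",
    table_name = "Table",
    transformation_ctx = "datasource0"
)

# ... transform and write datasource0 here ...

job.commit()  # persists the bookmark state for the next run

With bookmarks enabled, each run only sees records the job has not processed before, so for append-only data such as Firehose output the nightly write is effectively limited to the new partitions.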

Answer 1 (score: 0)

This can be done with the push_down_predicate parameter. The data was originally partitioned by year, month, day, and hour, so I just subtracted one day and used push_down_predicate as follows:

import datetime

# Yesterday's date, zero-padded, e.g. '2019-10-20'
timestamp = (datetime.datetime.now() - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
s1 = timestamp.split('-')

# Predicate restricting the read to yesterday's year/month/day partition
pdp = "partition_0 = " + s1[0] + " and partition_1 = " + s1[1] + " and partition_2 = " + s1[2]

# Only the partitions matching the predicate are read from the table
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "mailfiles_standardized",
    table_name = "firehoseoutput",
    push_down_predicate = pdp
)

# Write only the filtered frame (yesterday's partition) back to S3
glueContext.write_dynamic_frame.from_options(
    frame = datasource0,
    connection_type = "s3",
    connection_options = {
        "path": Bucket,
        "partitionKeys": ["Key1", "Key2"]
    },
    format = "parquet")