Reading data from Redshift and writing it to S3 in partitions

Date: 2019-07-15 22:43:07

Tags: aws-glue

I am trying to read tables (~200) from Redshift (every 24 hours, and the frequency could go up to hourly) and write them to an S3 bucket. In my use case, each table has a different partition scheme.

For example, the Transaction table has this structure:

TransactionID MerchantStore MerchantCity TransactionDate

And my S3 folder layout looks like this:

Transaction
 - MerchantCity=NewYork
  - Year
   - Month
    - Date
 - MerchantCity=Seattle
  - Year
   - Month
    - Date
    ...

This means the partition keys I use for the Transaction table are MerchantCity, YEAR(TransactionDate), MONTH(TransactionDate), DAY(TransactionDate).
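
What I want, sketched roughly in plain PySpark (column names follow the example above; the table reference and the bucket path are placeholders):

from pyspark.sql import SparkSession
from pyspark.sql.functions import year, month, dayofmonth

spark = SparkSession.builder.getOrCreate()

# Placeholder source; in practice this would be the frame read from Redshift.
txn = spark.table("transaction")

(txn
 .withColumn("year", year("TransactionDate"))
 .withColumn("month", month("TransactionDate"))
 .withColumn("day", dayofmonth("TransactionDate"))
 .write
 .partitionBy("MerchantCity", "year", "month", "day")
 .parquet("s3://my-bucket/Transaction/"))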

I tried reading the table from Redshift and then dumping it to S3 in partitions. Here is the code:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from joblib import Parallel, delayed
import multiprocessing

glueContext = GlueContext(SparkContext.getOrCreate())

# This list just emulates the different partition schemes, using a single Redshift table.

partition_keys = ['txn_type','amount','trans_date','acceptor_ref','location_schema','settlement_date','merchant_city','merchant_state','merchant_country','mcc','industry_code','tran_code','reason_code','plan_id','pin_txn','eci','prescore_amount','batch_date','src_file_name','load_time']

txn_table_df = glueContext.create_dynamic_frame_from_options(
    connection_type = 'redshift',
    connection_options = {"url": "jdbc:redshift://testredshiftcluster.**.us-east-1.redshift.amazonaws.com:5439/dev",
                          "user": "**", "password": "**", "dbtable": "loyalty.dailyclienttxn",
                          "redshiftTmpDir": "s3://loyalty-poc-arm/tempDirectory/"})

def read_and_write(partition_key):
    path = "s3://loyalty-poc-arm/allpartitionsWithouParallelRun4/" + partition_key
    glueContext.write_dynamic_frame_from_options(
        frame = txn_table_df,
        connection_type = "s3",    
        connection_options = {"path": path, "partitionKeys": [partition_key]},
        format = "parquet")

# Used joblib to run the loop in parallel so that the writes happen concurrently
results = Parallel(n_jobs=-1, prefer="threads")(delayed(read_and_write)(partition_key) for partition_key in partition_keys)

The job ran for about 3 hours and then failed abruptly.

What can I do to speed this up? Here is my AWS Glue job configuration:

Worker type: G.2X
No of workers: 149

1 Answer:

Answer 0 (score: 0):

I would recommend:

  • Use the UNLOAD command to export the data from Amazon Redshift to Amazon S3 (a rough sketch follows this list)
  • Use an Amazon Athena CREATE TABLE AS (CTAS) query to convert the exported data into a new partitioned table stored in Amazon S3 (see the CTAS sketch at the end of this answer)
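
A minimal sketch of the UNLOAD step, run from Python with psycopg2 (one of several ways to submit SQL to Redshift). The cluster endpoint, database, and bucket are taken from the question; the unload prefix and the IAM role ARN are placeholders you would substitute:

import psycopg2

# UNLOAD the table as Parquet (fall back to delimited text + GZIP if your
# cluster does not support Parquet UNLOAD). Prefix and IAM role are placeholders.
UNLOAD_SQL = """
UNLOAD ('SELECT * FROM loyalty.dailyclienttxn')
TO 's3://loyalty-poc-arm/unload/dailyclienttxn/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-role'
FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="testredshiftcluster.**.us-east-1.redshift.amazonaws.com",  # endpoint from the question
    port=5439, dbname="dev", user="**", password="**")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(UNLOAD_SQL)
conn.close()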

See: Converting to Columnar Formats - Amazon Athena

That example shows how to convert to a partitioned Parquet format, but the same approach can be used for other formats as well.
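
And a sketch of the CTAS step, submitted through boto3 (the Athena console works just as well). It assumes a staging table (here called loyalty_stage.dailyclienttxn_raw) has already been defined over the UNLOAD output in the Glue/Athena catalog, and that the output prefix and column names are adjusted to your schema; the partition columns must be the last ones selected, in the order listed in partitioned_by:

import boto3

# CTAS: rewrite the staged data as Parquet partitioned by
# merchant_city / year / month / day. All names below are illustrative.
CTAS_SQL = """
CREATE TABLE loyalty.dailyclienttxn_parquet
WITH (
    format = 'PARQUET',
    external_location = 's3://loyalty-poc-arm/transaction_parquet/',
    partitioned_by = ARRAY['merchant_city', 'year', 'month', 'day']
) AS
SELECT
    transactionid,
    merchantstore,
    merchant_city,
    year(transactiondate)  AS year,
    month(transactiondate) AS month,
    day(transactiondate)   AS day
FROM loyalty_stage.dailyclienttxn_raw
"""

athena = boto3.client("athena", region_name="us-east-1")
athena.start_query_execution(
    QueryString=CTAS_SQL,
    QueryExecutionContext={"Database": "loyalty"},
    ResultConfiguration={"OutputLocation": "s3://loyalty-poc-arm/athena-results/"},
)

Be aware that a single Athena CTAS or INSERT INTO query can only write a limited number of partitions (100 at the time of writing), so a large backfill may need to be split across several queries.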