I have a basic AWS Glue job set up that reads from an S3 bucket containing multiple folders:
S3://mybucket/table1
S3://mybucket/table2
S3://mybucket/table3
and so on. Every file in these folders has exactly the same format, and I would like to insert them into different Redshift tables (table1, table2, table3) in the same database. There seems to be a way to automate this from one S3 bucket to another S3 bucket, but I can't find any documentation on how to go from an S3 bucket to Redshift. Is this possible?
The code I currently have is just the basic Glue template code generated for this job, where partition_0 holds the folder name as a string:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "test", table_name = "all_data_bucket", transformation_ctx = "datasource0")
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("dataField1", "string", "dataField1", "string"), ("partition_0", "string", "partition_0", "string")], transformation_ctx = "applymapping1")
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_cols", transformation_ctx = "resolvechoice2")
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields3, catalog_connection = "REDSHIFT", connection_options = {"dbtable": "all_data_table", "database": "dev"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink4")
job.commit()
Answer 0 (score: 0):
1) Crawl the data as three separate tables.
2) Use boto3 to list the tables in that database.
3) Loop over the list and apply the Glue code to load each table into Redshift, as sketched below.
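A rough, untested sketch of what step 3 might look like. It assumes the crawler writes the three tables into the Glue catalog database "test", that the catalog table names match the target Redshift table names, and it reuses the "REDSHIFT" connection, "dev" database, and single dataField1 mapping from the question:

import sys
import boto3
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['TempDir', 'JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# List the tables the crawler created in the Glue Data Catalog database
# ("test" is assumed to be the crawler's target database).
glue_client = boto3.client('glue')
paginator = glue_client.get_paginator('get_tables')
table_names = []
for page in paginator.paginate(DatabaseName='test'):
    table_names.extend(t['Name'] for t in page['TableList'])

for table_name in table_names:
    # Read one catalog table (one S3 folder) at a time.
    datasource = glueContext.create_dynamic_frame.from_catalog(
        database='test',
        table_name=table_name,
        transformation_ctx='datasource_' + table_name)

    # Same mapping as the original job; partition_0 is no longer needed
    # because each catalog table already corresponds to a single folder.
    applymapping = ApplyMapping.apply(
        frame=datasource,
        mappings=[('dataField1', 'string', 'dataField1', 'string')],
        transformation_ctx='applymapping_' + table_name)
    resolvechoice = ResolveChoice.apply(
        frame=applymapping, choice='make_cols',
        transformation_ctx='resolvechoice_' + table_name)
    dropnullfields = DropNullFields.apply(
        frame=resolvechoice,
        transformation_ctx='dropnullfields_' + table_name)

    # Write to a Redshift table named after the catalog table / folder
    # (assumes the Redshift tables are named table1, table2, table3).
    glueContext.write_dynamic_frame.from_jdbc_conf(
        frame=dropnullfields,
        catalog_connection='REDSHIFT',
        connection_options={'dbtable': table_name, 'database': 'dev'},
        redshift_tmp_dir=args['TempDir'],
        transformation_ctx='datasink_' + table_name)

job.commit()

Note that the job's IAM role needs glue:GetTables permission for the boto3 call, and the transformation_ctx values are made unique per table so job bookmarks track each source separately.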