AWS Glue hangs and takes a long time in an ETL job

Asked: 2018-06-12 08:40:36

Tags: amazon-web-services apache-spark amazon-redshift etl aws-glue

I am using AWS Glue and want to dump the records from an Oracle table (about 80 million rows) into Redshift. However, after almost 2 hours the job was still hanging, nothing had been written to Amazon S3, and I eventually had to stop it.

My code:

import sys
import boto3
import json
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

db_username = [removed]
db_password = [removed]
db_url = [removed]
table_name = [removed]
jdbc_driver_name = "oracle.jdbc.OracleDriver"
s3_output = [removed]

df = (glueContext.read.format("jdbc")
      .option("url", db_url)
      .option("user", db_username)
      .option("password", db_password)
      .option("dbtable", table_name)
      .option("driver", jdbc_driver_name)
      .load())

df.printSchema()


datasource0 = DynamicFrame.fromDF(df, glueContext, "datasource0")
datasource0.schema()
datasource0.show()

applymapping1 = ApplyMapping.apply(
    frame = datasource0,
    mappings = [
        ("correlation_id", "decimal", "correlation_id", "bigint"),
        ("machine_pin", "varchar", "machine_pin", "varchar"),
        ("messageguid", "varchar", "messageguid", "varchar"),
        ("originating_domain_object_id", "decimal", "originating_domain_object_id", "bigint"),
        ("originating_message_type_id", "bigint", "originating_message_type_id", "bigint"),
        ("source_messageguid", "varchar", "source_messageguid", "varchar"),
        ("timestamp_of_request", "timestamp", "timestamp_of_request", "timestamp"),
        ("token", "varchar", "token", "varchar"),
        ("id", "decimal", "id", "bigint"),
        ("file_attachment", "decimal", "file_attachment", "bigint")
    ],
    transformation_ctx = "applymapping1")
resolvechoice2 = ResolveChoice.apply(frame = applymapping1,choice = "make_cols", transformation_ctx = "resolvechoice2")
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")

datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(
    frame = dropnullfields3,
    catalog_connection = "us01-isg-analytics",
    connection_options = {"dbtable": "analytics_team_data.message_details", "database": "jk_test"},
    redshift_tmp_dir = "s3://aws-glue-scripts-823837687343-us-east-1/glue_op/",
    transformation_ctx = "datasink4")
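As far as I can tell, the read above opens a single JDBC connection, so all 80 million rows flow through one executor. Below is a minimal, untested sketch of the same read with Spark's partitioned JDBC options; the partition column ID, the bounds, the partition count, and the fetch size are placeholders for illustration, not my actual schema:

# Sketch only: split the JDBC read across executors on a numeric key column.
# "ID", the bounds, numPartitions and fetchsize below are placeholder values.
df = (glueContext.read.format("jdbc")
      .option("url", db_url)
      .option("user", db_username)
      .option("password", db_password)
      .option("dbtable", table_name)
      .option("driver", jdbc_driver_name)
      .option("partitionColumn", "ID")    # numeric column to split on (placeholder)
      .option("lowerBound", "1")          # smallest ID value (placeholder)
      .option("upperBound", "80000000")   # largest ID value (placeholder)
      .option("numPartitions", "20")      # parallel connections to Oracle (placeholder)
      .option("fetchsize", "10000")       # rows fetched per round trip
      .load())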

When I did the same load with plain Apache Spark, dumping the data into Redshift took less than 1 hour. What modifications do I need to make so that Glue loads the data quickly?
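One modification I am considering, based on the AWS Glue documentation on reading from JDBC tables in parallel, is to read the table through the Data Catalog with hashfield/hashpartitions instead of a plain spark.read. A rough, untested sketch is below; the catalog database and table names are hypothetical placeholders, and a crawler or manual catalog entry for the Oracle table would have to exist first:

# Sketch only: parallel JDBC read through the Glue Data Catalog.
# "my_catalog_db" and "oracle_message_details" are hypothetical placeholder names.
datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database = "my_catalog_db",             # placeholder catalog database
    table_name = "oracle_message_details",  # placeholder catalog table
    additional_options = {
        "hashfield": "id",          # column Glue uses to partition the read
        "hashpartitions": "10"      # number of parallel JDBC readers
    },
    transformation_ctx = "datasource0")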

0 Answers:

No answers yet.