How do I stop AWS Glue from creating duplicate records?

Asked: 2019-03-08 09:01:14

Tags: amazon-redshift aws-glue glue

We currently use Glue (Python scripts) to migrate data from a MySQL database to a Redshift database. Yesterday we found a problem: some records in Redshift are duplicated, even though they carry the same primary key as in MySQL. Per our requirements, all data in the Redshift database should be identical to the data in the MySQL database.

I tried to delete the Redshift table before each migration, but could not find a way to do that...

Can you help me solve this problem?

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

## @params: [TempDir, JOB_NAME]
args = getResolvedOptions(sys.argv, ['TempDir','JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read the source table from the Glue Data Catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "glue-db", table_name = "table", transformation_ctx = "datasource0")
# Map source columns/types to the target schema
applymapping0_1 = ApplyMapping.apply(frame = datasource0, mappings = [...], transformation_ctx = "applymapping0_1")
# Resolve ambiguous column types by splitting them into separate columns
resolvechoice0_2 = ResolveChoice.apply(frame = applymapping0_1, choice = "make_cols", transformation_ctx = "resolvechoice0_2")
# Drop fields that are null in every record
dropnullfields0_3 = DropNullFields.apply(frame = resolvechoice0_2, transformation_ctx = "dropnullfields0_3")
# Write to Redshift through the catalog JDBC connection
datasink0_4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields0_3, catalog_connection = "redshift-cluster", connection_options = {"dbtable": "table", "database": "database"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink0_4")
# Commit the job bookmark state
job.commit()

My workaround was:

datasink0_4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields0_3, catalog_connection = "redshift-cluster", connection_options = {"dbtable": "mytable", "database": "mydatabase", "preactions": "delete from public.mytable;"}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink0_4")

2 Answers:

Answer 0 (score: 0):

Redshift does not enforce unique key constraints; primary and unique keys are informational only.
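You can see this for yourself (a minimal sketch; the table and values are made up):

-- Redshift accepts the PRIMARY KEY declaration but will not reject duplicates
CREATE TABLE demo (id INT PRIMARY KEY, name VARCHAR(32));
INSERT INTO demo VALUES (1, 'first');
INSERT INTO demo VALUES (1, 'second');      -- accepted: no uniqueness error
SELECT id, COUNT(*) FROM demo GROUP BY 1;   -- returns id 1 with a count of 2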

Unless you can guarantee that your source script avoids duplicates, you will need to run a regular job to de-duplicate on Redshift, for example:

delete from yourtable
where id in
(
select id
from yourtable
group by 1
having count(*) >1
)
;
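Note that this deletes every copy of a duplicated id, including the original row, so those rows have to be re-loaded afterwards. If the duplicates are exact row copies, a variant that keeps one copy per id could stage the distinct rows first (a sketch; yourtable is the same placeholder as above):

BEGIN;
-- keep one copy of each distinct row
CREATE TEMP TABLE yourtable_dedup AS
SELECT DISTINCT * FROM yourtable;
-- swap the deduplicated rows back in
DELETE FROM yourtable;
INSERT INTO yourtable
SELECT * FROM yourtable_dedup;
COMMIT;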

Have you considered AWS DMS as an alternative to Glue? It might work better for you.

Answer 1 (score: 0):

If your goal is not to have duplicates in the destination table, you can use the postactions option of the JDBC sink (see this answer for more details). Basically, it lets you implement a Redshift merge using a staging table.

In your case it should look like this (replacing existing records):

post_actions = (
         "DELETE FROM dest_table USING staging_table AS S WHERE dest_table.id = S.id;"
         "INSERT INTO dest_table (id,name) SELECT id,name FROM staging_table;"
         "DROP TABLE IF EXISTS staging_table"
    )
datasink0_4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = dropnullfields0_3, catalog_connection = "redshift-cluster", connection_options = {"dbtable": "staging_table", "database": "database", "overwrite": "true", "postactions": post_actions}, redshift_tmp_dir = args["TempDir"], transformation_ctx = "datasink0_4")