Redshift cannot import empty string values unless the column is cast to VARCHAR(65535)

Date: 2018-12-21 15:26:52

Tags: postgresql amazon-redshift aws-glue

I am running into strange behavior while importing data from Postgres into Redshift with AWS Glue. One of my Postgres tables has a field lastname varchar(255). AWS Glue moves the table with the following code:

import sys, boto3
from pyspark.context import SparkContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext


def getDBUrl(database):
    # Look up the JDBC URL and credentials from the named Glue connection
    # (the boto3 `glue` client is created at module level further down).
    dbConnection = glue.get_connection(Name=database)
    jdbc_url = dbConnection['Connection']['ConnectionProperties']['JDBC_CONNECTION_URL']
    username = dbConnection['Connection']['ConnectionProperties']['USERNAME']
    password = dbConnection['Connection']['ConnectionProperties']['PASSWORD']
    jdbc_url = jdbc_url + '?user=' + username + '&password=' + password
    print(jdbc_url)
    return jdbc_url


args = getResolvedOptions(sys.argv, ['TempDir', 'JOB_NAME'])

# Reuse an existing SparkContext if one is already running (e.g. on a dev endpoint)
sc = sc if 'sc' in vars() else SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

job = Job(glueContext)
job.init(args['JOB_NAME'], args)


source_database_connection = 'Postgres'
target_database_connection = 'Redshift'

# Subquery that returns the highest id in the source table; used as the
# upper bound when partitioning the JDBC read.
bound_query = """
(
    select COALESCE(max(id),0)
    from {0}
) as temp
"""

glue = boto3.client(service_name='glue', region_name='us-east-1')

# Create connection urls
jdbc_url_source = getDBUrl(database=source_database_connection)
jdbc_url_target = getDBUrl(database=target_database_connection)


def extract_and_save(source_table, target_table, source_bound_query):
    print("loading {0}".format(target_table))
    # Fetch the current max id so the JDBC read can be partitioned.
    (upper_bound,) = (spark.read
                      .jdbc(url=jdbc_url_source, table=source_bound_query)
                      .first())

    # Partitioned read from Postgres over the id column.
    df = spark.read.jdbc(url=jdbc_url_source,
                         table=source_table,
                         column='id',
                         lowerBound=1,
                         upperBound=upper_bound + 10,
                         numPartitions=50)

    # Write to Redshift; spark-redshift stages the data in S3 (TempDir)
    # and issues a COPY.
    df.write.format("com.databricks.spark.redshift") \
        .option("url", jdbc_url_target) \
        .option("dbtable", target_table) \
        .option("tempdir", args["TempDir"]) \
        .option("aws_iam_role", "AWS_ROLE") \
        .mode("overwrite") \
        .option("jdbcdriver", "com.amazon.redshift.jdbc41.Driver") \
        .save()

# Source query with every varchar column cast to VARCHAR(65535); this
# version imports cleanly.
source_user = """
(
SELECT 
    cast(firstname as VARCHAR(65535)),
    last_updated,
    registration_date,
    date_created,
    cast(sex as VARCHAR(65535)),
    id,
    cast(email as VARCHAR(65535)),
    cast(lastname as VARCHAR(65535)),
    cast(username as VARCHAR(65535))
FROM user
) as temp
"""


# do extract
extract_and_save(
    source_user,
    "user",
    bound_query.format("user"))

job.commit()

It works perfectly. But as soon as I use the columns' original size varchar(255) instead of VARCHAR(65535), the import fails with the error: Missing data for not-null field. From STL_LOAD_ERRORS I can see that an empty string arrived in the lastname field, and in STL_LOAD_ERRORS that value is marked as @NULL@. The Redshift table definition has no NOT NULL constraint.
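For reference, the load errors can be inspected from the job itself through the same Redshift connection. Here is a sketch along those lines (untested; the column names are those of the standard STL_LOAD_ERRORS system table, and jdbc_url_target is reused from the script above):

# Hedged sketch: read recent entries from STL_LOAD_ERRORS over JDBC,
# using the same subquery-alias pattern as the rest of the job.
errors_query = """
(
    select starttime, tbl, colname, type, col_length,
           raw_field_value, err_reason
    from stl_load_errors
    order by starttime desc
    limit 10
) as temp
"""
spark.read.jdbc(url=jdbc_url_target, table=errors_query).show(truncate=False)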

So why does varchar(255) run into problems handling empty strings while varchar(65535) handles them perfectly?
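A minimal reproduction should show whether the declared column width alone triggers the failure. This is an untested sketch: the probe_255 and probe_65535 tables are hypothetical and would have to be created in Redshift beforehand, identical except that lastname is varchar(255) in one and varchar(65535) in the other:

# Hedged sketch: write one row containing an empty string into two
# pre-created Redshift tables that differ only in the lastname width.
from pyspark.sql import Row

probe = spark.createDataFrame([Row(id=1, lastname='')])

for target in ('probe_255', 'probe_65535'):
    probe.write.format("com.databricks.spark.redshift") \
        .option("url", jdbc_url_target) \
        .option("dbtable", target) \
        .option("tempdir", args["TempDir"]) \
        .option("aws_iam_role", "AWS_ROLE") \
        .mode("append") \
        .save()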

0 Answers:

No answers yet.