Error loading data into Snowflake from a Glue ETL job

Date: 2020-01-07 13:26:32

Tags: python etl snowflake-cloud-data-platform aws-glue

I am trying to load data from CSV files in an S3 bucket into Snowflake using Glue ETL. I wrote a Python script in the ETL job, as shown below:

    import sys
    from awsglue.transforms import *
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from py4j.java_gateway import java_import
    SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"

    ## @params: [JOB_NAME, URL, ACCOUNT, WAREHOUSE, DB, SCHEMA, USERNAME, PASSWORD]
    args = getResolvedOptions(sys.argv, ['JOB_NAME', 'URL', 'ACCOUNT', 'WAREHOUSE', 'DB', 'SCHEMA', 'USERNAME', 'PASSWORD'])
    sc = SparkContext()
    glueContext = GlueContext(sc)
    spark = glueContext.spark_session
    job = Job(glueContext)
    job.init(args['JOB_NAME'], args)
    java_import(spark._jvm, "net.snowflake.spark.snowflake")

    spark._jvm.net.snowflake.spark.snowflake.SnowflakeConnectorUtils.enablePushdownSession(spark._jvm.org.apache.spark.sql.SparkSession.builder().getOrCreate())
    sfOptions = {
        "sfURL" : args['URL'],
        "sfAccount" : args['ACCOUNT'],
        "sfUser" : args['USERNAME'],
        "sfPassword" : args['PASSWORD'],
        "sfDatabase" : args['DB'],
        "sfSchema" : args['SCHEMA'],
        "sfWarehouse" : args['WAREHOUSE'],
    }

    dyf = glueContext.create_dynamic_frame.from_catalog(database = "salesforcedb", table_name = "pr_summary_csv", transformation_ctx = "dyf")
    df = dyf.toDF()
    ## df.write.format(SNOWFLAKE_SOURCE_NAME).options(**sfOptions).option("parallelism", "8").option("dbtable", "abcdef").mode("overwrite").save()
    df.write.format(SNOWFLAKE_SOURCE_NAME).options(**sfOptions).option("dbtable", "abcdef").save()
    job.commit()

The error that is raised is:

An error occurred while calling o81.save. Incorrect username or password was specified.

However, if I don't convert to a Spark DataFrame and instead use the DynamicFrame directly, I get an error like this:

AttributeError: 'function' object has no attribute 'format'

Can someone review my code and tell me what I'm doing wrong when converting the DynamicFrame to a DataFrame? Please let me know if I need to provide more information.

By the way, I'm new to Snowflake, and this is my attempt at loading data into it through AWS Glue.

2 Answers:

Answer 0 (score: 0)

An error occurred while calling o81.save. Incorrect username or password was specified.

The error message states that there is a problem with the username or password. If you are sure they are correct, make sure the Snowflake account name and URL are also correct.
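A quick way to isolate this is to run a trivial query through the same connector options before attempting the write. This is only a sketch that reuses `spark`, `SNOWFLAKE_SOURCE_NAME`, and `sfOptions` from the script in the question; if the read fails with the same message, the problem is in the credentials or the account/URL pair, not in the Glue-specific parts of the job:

    ## Minimal connectivity check (sketch): the connector authenticates here,
    ## so a bad user/password or a mismatched account/URL fails at this point
    ## already, before any table is written.
    test_df = spark.read.format(SNOWFLAKE_SOURCE_NAME) \
        .options(**sfOptions) \
        .option("query", "SELECT CURRENT_USER(), CURRENT_ACCOUNT(), CURRENT_REGION()") \
        .load()
    test_df.show()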

However, if I don't convert to a Spark DataFrame and instead use the DynamicFrame directly, I get an error like this:

AttributeError: 'function' object has no attribute 'format'

The write method of a Glue DynamicFrame is different from that of a Spark DataFrame, so the same chained calls generally cannot be used. Please check the documentation:

https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-dynamic-frame.html

It seems you need to specify the parameters as connection_options:

write(connection_type, connection_options, format, format_options, accumulator_size)

connection_options = {"url": "jdbc-url/database", "user": "username", "password": "password","dbtable": "table-name", "redshiftTmpDir": "s3-tempdir-path"} 
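For context, `dyf.write` is a method on the DynamicFrame rather than a Spark `DataFrameWriter`, which is why chaining `.format(...)` onto it raises the `AttributeError: 'function' object has no attribute 'format'` seen above. If you do want to stay on the DynamicFrame path, one possible route is a JDBC connection defined in the Glue Data Catalog; the sketch below assumes a hypothetical catalog connection named "snowflake-jdbc-conn" (backed by the Snowflake JDBC driver jar) and is not a tested Snowflake setup, which is why most examples convert to a Spark DataFrame and use the Snowflake Spark connector instead:

    ## Sketch only: writes the DynamicFrame through a JDBC connection defined in
    ## the Glue Data Catalog. "snowflake-jdbc-conn" is a hypothetical connection
    ## name and this path is untested against Snowflake.
    glueContext.write_dynamic_frame.from_jdbc_conf(
        frame = dyf,
        catalog_connection = "snowflake-jdbc-conn",
        connection_options = {"dbtable": "abcdef", "database": args['DB']}
    )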

Even with a DynamicFrame, you may still end up with the incorrect username or password error, so I suggest you focus on fixing the credentials first.

Answer 1 (score: 0)

Here is tested Glue code (you can copy-paste it, changing only the table names) that you can use to set up the Glue ETL. You will have to add the JDBC and Spark jars; a boto3 sketch of one way to pass the jars and job parameters follows after the script. You can use the following link for the setup: https://community.snowflake.com/s/article/How-To-Use-AWS-Glue-With-Snowflake


import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from py4j.java_gateway import java_import
SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"

## @params: [JOB_NAME, URL, ACCOUNT, WAREHOUSE, DB, SCHEMA, USERNAME, PASSWORD]
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'URL', 'ACCOUNT', 'WAREHOUSE', 'DB', 'SCHEMA', 'USERNAME', 'PASSWORD'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)


## uj = sc._jvm.net.snowflake.spark.snowflake
spark._jvm.net.snowflake.spark.snowflake.SnowflakeConnectorUtils.enablePushdownSession(spark._jvm.org.apache.spark.sql.SparkSession.builder().getOrCreate())
sfOptions = {
"sfURL" : args['URL'],
"sfAccount" : args['ACCOUNT'],
"sfUser" : args['USERNAME'],
"sfPassword" : args['PASSWORD'],
"sfDatabase" : args['DB'],
"sfSchema" : args['SCHEMA'],
"sfWarehouse" : args['WAREHOUSE'],
}

## Read from a Snowflake table into a Spark Data Frame
df = spark.read.format(SNOWFLAKE_SOURCE_NAME).options(**sfOptions).option("query", "Select * from <tablename>").load()
df.show()

## Perform any kind of transformations on your data and save as a new Data Frame: df1 = df.[Insert any filter, transformation, or other operation]
## Write the Data Frame contents back to Snowflake in a new table
df1.write.format(SNOWFLAKE_SOURCE_NAME).options(**sfOptions).option("dbtable", "[new_table_name]").mode("overwrite").save()
job.commit()
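To run the script above as a Glue job, the jars can be passed through the `--extra-jars` job argument, and the script's parameters through `--URL`, `--ACCOUNT`, and so on, since `getResolvedOptions` reads them from the job arguments. Below is a rough boto3 sketch of that wiring; every name, S3 path, and value is a placeholder to replace with your own, and passing the password this way is only for illustration:

    import boto3

    glue = boto3.client("glue")

    ## Sketch only: registers the script above as a Glue job. The two jars are
    ## the Snowflake JDBC driver and the Snowflake Spark connector referenced in
    ## the linked article; all paths and values below are placeholders.
    glue.create_job(
        Name = "s3-to-snowflake",
        Role = "MyGlueServiceRole",
        Command = {"Name": "glueetl", "ScriptLocation": "s3://my-bucket/scripts/snowflake_job.py"},
        DefaultArguments = {
            "--extra-jars": "s3://my-bucket/jars/snowflake-jdbc.jar,s3://my-bucket/jars/spark-snowflake.jar",
            "--URL": "https://<account>.snowflakecomputing.com",
            "--ACCOUNT": "<account>",
            "--WAREHOUSE": "<warehouse>",
            "--DB": "<database>",
            "--SCHEMA": "<schema>",
            "--USERNAME": "<username>",
            "--PASSWORD": "<password>",
        },
    )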