AWS Glue / PySpark - how to programmatically create an Athena table using Glue

Date: 2019-05-31 10:59:34

Tags: amazon-web-services amazon-s3 pyspark amazon-athena aws-glue

I'm running a script in AWS Glue that loads data from S3, performs some transformations, and saves the results back to S3. I'm trying to add one more step to this routine: creating a new table in an existing Athena database.

I couldn't find any similar examples in the AWS documentation. In the examples I've come across, the results are only written to S3. Is this possible with Glue?

Here is some sample code. How can I modify it so that it creates an Athena table holding the output results?

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame

from pyspark.sql import SparkSession
from pyspark.context import SparkContext
from pyspark.sql.functions import *
from pyspark.sql import SQLContext
from pyspark.sql.types import *


args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)


# Load the source table from the Glue Data Catalog
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "dataset", table_name = "table_1", transformation_ctx = "datasource0")
# Map the source columns and types to the target columns and types
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("id", "long", "id", "long"), ("description", "string", "description", "string")], transformation_ctx = "applymapping1")
# Resolve ambiguous column types by converting them to structs
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
# Drop fields that are null in every record
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
# Write the transformed data to S3 as Parquet
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://..."}, format = "parquet", transformation_ctx = "datasink4")


# TODO: create Athena table with the output results

job.commit()

1 Answer:

Answer 0 (score: 0):

I can think of two ways to do this. One is to use the SDK to get a reference to the Athena API and use it to execute a query with a CREATE TABLE statement, as seen at this blog post.
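A minimal sketch of this first approach using boto3; the table name, columns, and S3 paths are placeholders you would adapt to your job's output:

import boto3

athena = boto3.client('athena')

# DDL pointing Athena at the Parquet files the Glue job wrote to S3.
# Database, table, and locations below are hypothetical examples.
create_table_ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS dataset.table_1_output (
    id bigint,
    description string
)
STORED AS PARQUET
LOCATION 's3://your-bucket/your-output-path/'
"""

athena.start_query_execution(
    QueryString=create_table_ddl,
    QueryExecutionContext={'Database': 'dataset'},
    ResultConfiguration={'OutputLocation': 's3://your-bucket/athena-query-results/'}
)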

Another, potentially more interesting, option is to use the Glue API to create a crawler for your S3 bucket and then run the crawler, as sketched below.
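A sketch of the crawler approach, again with boto3; the crawler name, IAM role, and S3 path are hypothetical:

import boto3

glue = boto3.client('glue')

# Create a crawler over the job's S3 output; when run, it infers the
# schema and registers the table in the Glue Data Catalog.
glue.create_crawler(
    Name='table_1_output_crawler',
    Role='AWSGlueServiceRole-Example',  # IAM role with Glue and S3 permissions
    DatabaseName='dataset',
    Targets={'S3Targets': [{'Path': 's3://your-bucket/your-output-path/'}]}
)

glue.start_crawler(Name='table_1_output_crawler')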

With the second approach the table is cataloged, so it can be used not only from Athena but also from EMR and Redshift Spectrum.