How do I run a SQL SELECT on a DataFrame created by AWS Glue in Spark?

Asked: 2019-05-21 03:30:46

Tags: scala pyspark apache-spark-sql aws-glue

I have the following AWS Glue job, which basically reads data from one table and extracts it as a CSV file in S3. However, I want to run a query against that table (a SELECT, SUM, and GROUP BY) and get the output as CSV. How can I do that in AWS Glue? I am new to Spark, so any help is appreciated.

import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

datasource0 = glueContext.create_dynamic_frame.from_catalog(
    database="db1",
    table_name="dbo1_expdb_dbo_stg_plan",
    transformation_ctx="datasource0")

applymapping1 = ApplyMapping.apply(
    frame=datasource0,
    mappings=[("plan_code", "int", "plan_code", "int"),
              ("plan_id", "int", "plan_id", "int")],
    transformation_ctx="applymapping1")

datasink2 = glueContext.write_dynamic_frame.from_options(
    frame=applymapping1,
    connection_type="s3",
    connection_options={"path": "s3://bucket"},
    format="csv",
    transformation_ctx="datasink2")
job.commit()

2 Answers:

Answer 0 (Score: 1)

The Glue context's create_dynamic_frame.from_catalog function creates a DynamicFrame, not a DataFrame, and DynamicFrames do not support running SQL queries.

To run a SQL query, you first need to convert the DynamicFrame to a DataFrame, register a temporary table in Spark, and then run the SQL query against that temp table.

Sample code:

from pyspark.context import SparkContext
from awsglue.context import GlueContext
from pyspark.sql import SQLContext

glueContext = GlueContext(SparkContext.getOrCreate())
spark_session = glueContext.spark_session
sqlContext = SQLContext(spark_session.sparkContext, spark_session)

# Read the catalog table as a DynamicFrame, then convert it to a DataFrame.
DyF = glueContext.create_dynamic_frame.from_catalog(database="{{database}}", table_name="{{table_name}}")
df = DyF.toDF()

# Register a temp table so it can be referenced from SQL.
df.registerTempTable('{{name}}')
df = sqlContext.sql('{{your select query with the temp table name used above}}')
df.write.format('{{orc/parquet/whatever}}').partitionBy("{{columns}}").save('path to s3 location')
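
Since the question specifically asks for CSV output in S3, one option is to convert the query result back to a DynamicFrame and reuse Glue's CSV writer. A minimal sketch, assuming df holds the query result and reusing the s3://bucket path from the question:

from awsglue.dynamicframe import DynamicFrame

# Convert the query result back to a DynamicFrame so Glue's writer can emit CSV.
result_dyf = DynamicFrame.fromDF(df, glueContext, "result_dyf")
glueContext.write_dynamic_frame.from_options(
    frame=result_dyf,
    connection_type="s3",
    connection_options={"path": "s3://bucket"},
    format="csv")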

Answer 1 (Score: 0)

This is how I did it: first convert the Glue DynamicFrame to a Spark DataFrame, then run the query using the glueContext object's sql method.

spark_dataframe = glue_dynamic_frame.toDF()
spark_dataframe.createOrReplaceTempView("spark_df")

glueContext.sql("""
SELECT * 
FROM spark_df
LIMIT 10
""").show()