How to handle an empty table from the Glue Data Catalog in PySpark

Date: 2019-01-28 08:43:12

Tags: python pyspark aws-glue

I am trying to run SparkSQL on SageMaker through AWS Glue, but without success.

I want to parameterize the Glue job, so it must be acceptable for it to access an empty table. However, when the method glueContext.create_dynamic_frame.from_catalog is given an empty table, it raises an error.

Here is the code that raises the error:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session  # needed for spark.sql below

df1 = glueContext.create_dynamic_frame.from_catalog(
    database = "<glue's database name>",
    table_name = "<glue's table name>",  # I want this to be parameterized
    transformation_ctx = "df1"
)
df1 = df1.toDF()  # The error is raised here
df1.createOrReplaceTempView('tmp_table')
df_sql = spark.sql("""SELECT ...""")

Here is the error:

Unable to infer schema for Parquet. It must be specified manually.

Is it possible to use an empty table as input to a DynamicFrame? Thanks in advance.

1 Answer:

Answer 0 (score: -1)

df1 = df1.toDF()  # The error is raised here

Replace that line with:

from awsglue.dynamicframe import DynamicFrame

dynamic_df = DynamicFrame.fromDF(df1, glueContext, 'sample_job')  # Convert a PySpark DataFrame into a DynamicFrame
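Note that the answer above converts in the opposite direction (DataFrame to DynamicFrame) and does not address the empty-table failure itself. One possible workaround, not taken from this thread, is to catch the schema-inference error that Spark raises for an empty Parquet source and skip that table. The sketch below assumes only that `toDF()` raises an exception whose message contains the text from the question; `to_df_or_none` is a hypothetical helper name.

```python
def to_df_or_none(dynamic_frame):
    """Convert a Glue DynamicFrame to a Spark DataFrame, or return None
    when the underlying table is empty and no schema can be inferred."""
    try:
        return dynamic_frame.toDF()
    except Exception as exc:
        # Spark reports an empty Parquet source as:
        #   "Unable to infer schema for Parquet. It must be specified manually."
        if "Unable to infer schema" in str(exc):
            return None
        raise
```

The parameterized job can then test the result for None and move on to the next table instead of failing.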