How do I exclude partitions when converting CSV to ORC with AWS Glue?

Date: 2018-09-13 18:06:43

Tags: etl aws-glue orc

I have a bunch of CSV files in S3 that I'm trying to convert to ORC using an ETL job in AWS Glue. I have a crawler that crawls the directory containing the CSVs and generates a table. The table looks like this:

Column name | Data type | Partition key
---------------------------------------
field1      | string    |
field2      | string    |
field3      | string    |
partition_0 | string    | Partition (0)
partition_1 | string    | Partition (1)

Next, I try to convert the CSVs into ORC files. Here is an ETL script similar to the one I'm using:

import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ['JOB_NAME', 'database', 'table_name', 'output_dir'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Read one partition of the cataloged CSV table, pruning at the source.
partition_predicate = '(partition_0 = "val1") AND (partition_1 = "val2")'
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = args['database'], table_name = args['table_name'], push_down_predicate = partition_predicate, transformation_ctx = "datasource0")

# Write the frame back out to S3 as ORC.
final = glueContext.write_dynamic_frame.from_options(frame = datasource0, connection_type = "s3", connection_options = { "path": args['output_dir'] }, format = "orc")
job.commit()
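As a sanity check (a minimal sketch using the datasource0 frame above), printing the schema at this point already shows the crawler's partition keys coming back as ordinary string fields:

# Sketch: inspect the schema of the frame read from the catalog.
# partition_0 and partition_1 appear as regular string columns
# alongside field1..field3, which is why they end up in the ORC output.
datasource0.printSchema()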

I also have another crawler that crawls the output directory containing the ORC files. The table it generates looks like this:

Column name | Data type | Partition key
---------------------------------------
field1      | string    |
field2      | string    |
field3      | string    |
partition_0 | string    |
partition_1 | string    |
partition_0 | string    | Partition (0)
partition_1 | string    | Partition (1)

It looks like the crawler thinks the partitions are fields inside the ORC files (which they shouldn't be). How can I modify my script so that the CSV-to-ORC conversion doesn't include the partition keys as schema columns?

1 Answer:

Answer 0 (score: 0):

If you need to keep the partitions, add the partitionKeys option to the writer:

final = glueContext.write_dynamic_frame.from_options(frame = datasource0, connection_type = "s3", connection_options = { "path": args['output_dir'], "partitionKeys": ["partition_0", "partition_1"] }, format = "orc")
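With partitionKeys set, the writer lays the output out in Hive-style key=value paths (e.g. .../partition_0=val1/partition_1=val2/...) and leaves the partition columns out of the ORC data itself, so a crawler pointed at the output should pick up partition_0 and partition_1 as partition keys rather than as schema columns.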

Otherwise, simply drop the partition columns first:

cleanDyf = datasource0.drop_fields(["partition_0", "partition_1"])
final = glueContext.write_dynamic_frame.from_options(frame = cleanDyf, connection_type = "s3", connection_options = { "path": args['output_dir'] }, format = "orc")
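As a quick check before the write (a sketch using the frames above), the cleaned frame's schema should no longer list the partition columns; a re-run of the output crawler should then produce a table with only field1, field2, and field3:

cleanDyf.printSchema()   # expect only field1, field2, field3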