I have this XML, which is read by AWS Glue and inserted into RDS. Here is a sample of the XML:
<VENDOR>
<DETAILS>
<RECORD>
<VENDOR_NUMBER>123456D</VENDOR_NUMBER>
<VENDOR_NAME>STORE 1</VENDOR_NAME>
</RECORD>
<RECORD>
<VENDOR_NUMBER>123456</VENDOR_NUMBER>
<VENDOR_NAME>STORE 2</VENDOR_NAME>
</RECORD>
<RECORD>
<VENDOR_NUMBER>123456C</VENDOR_NUMBER>
<VENDOR_NAME>STORE 3</VENDOR_NAME>
</RECORD>
</DETAILS>
<TRAILER>
<TOTAL_RECORD>00003</TOTAL_RECORD>
</TRAILER>
</VENDOR>
For some reason, a column in the DynamicFrame created from the XML always comes out as a struct type. Here is the sample code and the result of printing the schema:
datasource = glueContext.create_dynamic_frame.from_catalog(database = "database", table_name = "table_name", transformation_ctx = "datasource")
datasource.printSchema()
root
|-- VENDOR_NAME: string (nullable = true)
|-- VENDOR_NUMBER: struct (nullable = true)
| |-- double: double (nullable = true)
| |-- int: integer (nullable = true)
| |-- string: string (nullable = true)
I tried adding a resolve option to cast the data to string. It works for the int type, but not for the double type: the original value is 123456D, but somewhere along the way it becomes 123456.0. Here are the sample script and the resulting rows in RDS:
resolvechoice = ResolveChoice.apply(frame = datasource, choice = "cast:string", transformation_ctx = "resolvechoice")
VENDOR_NUMBER  VENDOR_NAME
123456.0       STORE 1
123456         STORE 2
123456C        STORE 3
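A likely explanation (my assumption, not something the Glue docs confirm for this case) is that type inference classifies each value independently: 123456 parses as an int, 123456D looks like a Java-style double literal (a trailing D is a valid double suffix on the JVM, so it reads as 123456.0), and 123456C only fits a string, which is why the column becomes a three-way choice struct. A minimal pure-Python sketch of that hypothetical per-value inference:

```python
def infer_xml_value_type(value: str) -> str:
    """Hypothetical sketch of per-value type inference, mimicking how a
    JVM-based crawler might classify each VENDOR_NUMBER independently."""
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    # On the JVM, a trailing D/d (or F/f) is a numeric-literal suffix, so
    # "123456D" can parse as the double 123456.0 -- which is exactly how
    # the original value gets mangled once the choice column is cast.
    if value[:-1].isdigit() and value[-1] in "DdFf":
        return "double"
    return "string"

print([infer_xml_value_type(v) for v in ["123456D", "123456", "123456C"]])
# → ['double', 'int', 'string']
```

This matches the struct members seen in the printed schema (double, int, string), and explains why `cast:string` cannot recover the literal `123456D`: by the time the cast runs, the value is already the double 123456.0.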
I also tried updating the table's schema in the Data Catalog, changing every field's data type to string, and selecting the option in the Glue crawler's configuration to ignore schema changes, but that did not work either. These are the crawler's configuration options:
Configuration options
- Schema updates in the data store: Ignore the change and don't update the table in the data catalog.
- Inherit schema from table: Update all new and existing partitions with metadata from the table.
- Object deletion in the data store: Mark the table as deprecated in the data catalog.
Is there a way to make the Glue job always read the data from the XML as strings?
Answer 0 (score: 0)
Someone on the AWS forums answered my question. I'm posting the solution here in case anyone else needs it.
I used spark-xml to generate a DataFrame instead of a DynamicFrame.
df = spark.read.format('xml') \
.option("rowTag", "RECORD") \
.load("s3://bucket/glue/input-xml/")
df.printSchema()
df.show()
root
|-- VENDOR_NAME: string (nullable = true)
|-- VENDOR_NUMBER: string (nullable = true)
+-----------+-------------+
|VENDOR_NAME|VENDOR_NUMBER|
+-----------+-------------+
| STORE 1| 123456D|
| STORE 2| 123456|
| STORE 3| 123456C|
+-----------+-------------+
For this to work, you need to download the spark-xml JAR file, upload it to S3, and add it to the Glue job's "Dependent jars path". https://mvnrepository.com/artifact/com.databricks/spark-xml_2.11/0.7.0 https://docs.aws.amazon.com/en_pv/glue/latest/dg/add-job.html
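To carry this through to RDS, the spark-xml DataFrame can be handed back to Glue as a DynamicFrame and written through a JDBC connection. Below is a sketch of a complete job under that approach; the bucket path, connection name (`my-rds-connection`), database, and table names are placeholders I've invented for illustration, and the script assumes the spark-xml JAR is already on the job's dependent jars path:

```python
# Sketch of a Glue job: read the XML with spark-xml (all tag values come
# back as plain strings, so 123456D survives intact), then convert to a
# DynamicFrame and write to RDS over a Glue JDBC connection.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read each <RECORD> element as one row; no numeric inference is applied
# here, so VENDOR_NUMBER stays a string.
df = (spark.read.format("xml")
      .option("rowTag", "RECORD")
      .load("s3://bucket/glue/input-xml/"))

# Convert back to a DynamicFrame so Glue's JDBC writer can be used.
dyf = DynamicFrame.fromDF(df, glueContext, "vendors")

# "my-rds-connection", "mydb", and "schema.vendor" are hypothetical names;
# replace them with your catalog connection and target table.
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=dyf,
    catalog_connection="my-rds-connection",
    connection_options={"dbtable": "schema.vendor", "database": "mydb"},
)
job.commit()
```

This only runs inside a Glue job environment, so it is a template rather than something runnable locally.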