Column data type differs between the Glue Data Catalog and the getCatalogSource function

Asked: 2018-12-10 07:43:52

Tags: amazon-web-services apache-spark aws-glue glue

I created a Glue crawler to read Apache access logs. Below is the table definition the crawler created in the Glue Data Catalog; I was able to obtain the following DDL statement from Athena.

CREATE EXTERNAL TABLE crawler_access_log(
.. Other column names
timestamp string COMMENT 'from deserializer'
) ROW FORMAT SERDE 
'com.amazonaws.glue.serde.GrokSerDe' 
WITH SERDEPROPERTIES ( 
'input.format'='%{COMBINEDAPACHELOG}') 
STORED AS INPUTFORMAT 
'org.apache.hadoop.mapred.TextInputFormat' 
OUTPUTFORMAT 
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://some location'
TBLPROPERTIES (
'CrawlerSchemaDeserializerVersion'='1.0', 
'CrawlerSchemaSerializerVersion'='1.0', 
'UPDATED_BY_CRAWLER'='crawler_access_log', 
'averageRecordSize'='268', 
'classification'='combinedapache', 
'compressionType'='gzip', 
'grokPattern'='%{COMBINEDAPACHELOG}', 
'objectCount'='2', 
'recordCount'='71552', 
'sizeKey'='25268746', 
'typeOfData'='file')

//SAMPLE TIMESTAMP COLUMN DATA (data type: string)
 20/Jul/2018:03:27:44 +0000
 20/Jul/2018:03:27:44 +0000
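As a standalone sanity check (plain Scala, not Glue code), the value in this column parses cleanly as an Apache combined-log timestamp, `dd/MMM/yyyy:HH:mm:ss Z`, which carries time-of-day and zone information that a `date` column cannot hold:

```scala
import java.time.OffsetDateTime
import java.time.format.DateTimeFormatter
import java.util.Locale

object TimestampCheck extends App {
  // Apache combined-log timestamp format, e.g. 20/Jul/2018:03:27:44 +0000
  val fmt = DateTimeFormatter.ofPattern("dd/MMM/yyyy:HH:mm:ss Z", Locale.ENGLISH)
  val parsed = OffsetDateTime.parse("20/Jul/2018:03:27:44 +0000", fmt)

  println(parsed.toLocalDate) // the part a date column keeps: 2018-07-20
  println(parsed.toLocalTime) // the part a date column drops: 03:27:44
}
```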

But when I read the same table through glueContext, the data type of the timestamp column becomes date instead of string. I am using the following code to read the data from the table.

val rawDynamicDataFrame = glueContext.getCatalogSource(
  database = "someDB",
  tableName = "crawler_access_log",
  redshiftTmpDir = "",
  transformationContext = "rawDynamicDataFrame"
).getDynamicFrame()

When I run printSchema and look at the data in the DynamicFrame, I see that the data type of the timestamp column is date instead of string, so the data has been truncated.

scala> rawDynamicDataFrame.printSchema
root
|-- xx: string
|-- xx: string
|-- xx: string
|-- timestamp: date
|-- xx: string
|-- xx: string
|-- xx: string
scala> rawDynamicDataFrame.show(2)
2018-07-20  // original value: 20/Jul/2018:03:27:44 +0000
2018-07-20  // original value: 20/Jul/2018:03:27:44 +0000

I can't figure out why the data type changes even though Glue is reading from its own Data Catalog.
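For reference, a possible mitigation (a sketch only — `applyMapping` is a standard Glue DynamicFrame transform, but the exact mapping tuple below is an assumption and untested against this table) would be to re-map the column back to string right after reading:

```scala
// Sketch: force the mis-typed column back to string.
// Tuple shape: (source name, source type, target name, target type)
val withStringTs = rawDynamicDataFrame.applyMapping(
  Seq(("timestamp", "date", "timestamp", "string"))
)
```

Note the caveat: if the truncation happens when the row is parsed by the Grok classifier, a downstream cast cannot recover the lost time-of-day; in that case the classification/grok pattern itself would need to change.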

0 Answers:

No answers