AWS Glue job to merge columns into a timestamp

Asked: 2019-03-20 14:35:33

Tags: pyspark etl aws-glue

I'm very new to AWS Glue and Spark. I'm trying to run an ETL job; my data is currently parsed into three separate columns (year, month, and day), and I need to merge those columns into a datetime (or timestamp) format. Glue generated a basic script, and I've been trying to add this logic to it, with little success.

Here's the relevant part of the code:

timestampedDf = dropnullfields3.toDF()
timestampedDf = timestampedDf.withColumn("snap_timestamp", datetime.date(year=int(timestampedDf['year']),day=int(timestampedDf['day']),month=int(timestampedDf['month']))
timestamped4 = DynamicFrame.fromDF(timestampedDf, glueContext, "timestamped4")

The logger is also giving me the following error:

SyntaxError: File "/tmp/g-8b0c4794d23f8afeb757fae2a20be7a4b9222fef-5379414877065320437/script_2019-03-20-14-12-14.py", line 40
    timestamped4 = DynamicFrame.fromDF(timestampedDf, glueContext, "timestamped4")
SyntaxError: invalid syntax

Here's the full code for reference.

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import datetime

## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])

sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "perseus-reporting-db", table_name = "charges_dev_perseus_reporting", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "perseus-reporting-db", table_name = "charges_dev_perseus_reporting", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("amount", "double", "amount", "double"), ("customerid", "string", "customerid", "string"), ("status", "string", "status", "string"), ("createdat", "string", "createdat", "string"), ("year", "string", "year", "string"), ("month", "string", "month", "string"), ("day", "string", "day", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("amount", "double", "amount", "double"), ("customerid", "string", "customerid", "string"), ("status", "string", "status", "string"), ("createdat", "string", "createdat", "string"), ("year", "string", "year", "string"), ("month", "string", "month", "string"), ("day", "string", "day", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_cols", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_cols", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")

timestampedDf = dropnullfields3.toDF()
timestampedDf = timestampedDf.withColumn("snap_timestamp", datetime.date(year=int(timestampedDf['year']),day=int(timestampedDf['day']),month=int(timestampedDf['month']))
timestamped4 = DynamicFrame.fromDF(timestampedDf, glueContext, "timestamped4")

## @type: DataSink
## @args: [catalog_connection = "s3-rds-conn-perseus", connection_options = {"dbtable": "charges_dev_perseus_reporting", "database": "reporting-db"}, transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_jdbc_conf(frame = timestamped4, catalog_connection = "s3-rds-conn-perseus", connection_options = {"dbtable": "charges_dev_perseus_reporting", "database": "reporting-db"}, transformation_ctx = "datasink4")
job.commit()

Thanks!

1 Answer:

Answer 0 (score: 0)

Try using the to_date() and concat() Spark functions.
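
For the question's columns, a minimal sketch of that approach could look like the following (it reuses the names from the question's script; the "yyyy-MM-dd" format string and the two-argument to_date(col, format), available in Spark 2.2+, are assumptions):

from pyspark.sql.functions import concat, lit, to_date

# Build a "yyyy-MM-dd" string from the three string columns, then parse it.
# withColumn expects a Column expression, and Python's datetime.date()
# cannot operate on Spark columns, which is why the original call fails.
timestampedDf = dropnullfields3.toDF()
timestampedDf = timestampedDf.withColumn(
    "snap_timestamp",
    to_date(
        concat(timestampedDf["year"], lit("-"),
               timestampedDf["month"], lit("-"),
               timestampedDf["day"]),
        "yyyy-MM-dd",
    )
)

Swap in to_timestamp from the same module if a full timestamp type is needed. Converting back with DynamicFrame.fromDF then works as in the question, provided DynamicFrame is imported (from awsglue.dynamicframe import DynamicFrame). Note also that the original withColumn(...) line is missing a closing parenthesis, which is what produces the SyntaxError reported on the line after it.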