How do I set up a local development environment for Scala Spark ETL to run in AWS Glue?

Asked: 2018-03-13 10:42:48

Tags: scala pyspark sbt aws-glue

I'd like to be able to write Scala in my local IDE and then deploy it to AWS Glue as part of a build process. However, I'm having trouble finding the libraries required to build the GlueApp skeleton that AWS generates.

aws-java-sdk-glue doesn't contain the imported classes, and I can't find those libraries anywhere else. They must exist somewhere, though; perhaps they are just a Java/Scala port of this library: aws-glue-libs

The template Scala code from AWS:

import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.MappingSpec
import com.amazonaws.services.glue.errors.CallSite
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.amazonaws.services.glue.util.JsonOptions
import org.apache.spark.SparkContext
import scala.collection.JavaConverters._

object GlueApp {
  def main(sysArgs: Array[String]) {
    val spark: SparkContext = new SparkContext()
    val glueContext: GlueContext = new GlueContext(spark)
    // @params: [JOB_NAME]
    val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("JOB_NAME").toArray)
    Job.init(args("JOB_NAME"), glueContext, args.asJava)
    // @type: DataSource
    // @args: [database = "raw-tickers-oregon", table_name = "spark_delivery_2_1", transformation_ctx = "datasource0"]
    // @return: datasource0
    // @inputs: []
    val datasource0 = glueContext.getCatalogSource(database = "raw-tickers-oregon", tableName = "spark_delivery_2_1", redshiftTmpDir = "", transformationContext = "datasource0").getDynamicFrame()
    // @type: ApplyMapping
    // @args: [mapping = [("exchangeid", "int", "exchangeid", "int"), ("data", "struct", "data", "struct")], transformation_ctx = "applymapping1"]
    // @return: applymapping1
    // @inputs: [frame = datasource0]
    val applymapping1 = datasource0.applyMapping(mappings = Seq(("exchangeid", "int", "exchangeid", "int"), ("data", "struct", "data", "struct")), caseSensitive = false, transformationContext = "applymapping1")
    // @type: DataSink
    // @args: [connection_type = "s3", connection_options = {"path": "s3://spark-ticker-oregon/target", "compression": "gzip"}, format = "json", transformation_ctx = "datasink2"]
    // @return: datasink2
    // @inputs: [frame = applymapping1]
    val datasink2 = glueContext.getSinkWithFormat(connectionType = "s3", options = JsonOptions("""{"path": "s3://spark-ticker-oregon/target", "compression": "gzip"}"""), transformationContext = "datasink2", format = "json").writeDynamicFrame(applymapping1)
    Job.commit()
  }
}

And the build.sbt I've started putting together for a local build:

name := "aws-glue-scala"

version := "0.1"

scalaVersion := "2.11.12"

updateOptions := updateOptions.value.withCachedResolution(true)

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.1"

The documentation for the AWS Glue Scala API seems to outline functionality similar to what is available in the AWS Glue Python library. So perhaps all that's needed is to download and build the PySpark AWS Glue library and add it to the classpath? That may be possible, since the Glue Python library uses Py4J.

4 Answers:

Answer 0 (score: 10)

@Frederic gave a very helpful hint to get the dependency from s3://aws-glue-jes-prod-us-east-1-assets/etl/jars/glue-assembly.jar.

Unfortunately, that version of glue-assembly.jar is outdated and ships Spark in version 2.1. That's fine if you only use backward-compatible features, but if you rely on the latest Spark version (and possibly the latest Glue features), you can get the appropriate jar from a Glue dev-endpoint under /usr/share/aws/glue/etl/jars/glue-assembly.jar.

If you have a dev-endpoint named my-dev-endpoint, you can copy the current jar from it:

export DEV_ENDPOINT_HOST=`aws glue get-dev-endpoint --endpoint-name my-dev-endpoint --query 'DevEndpoint.PublicAddress' --output text`

scp -i dev-endpoint-private-key \
glue@$DEV_ENDPOINT_HOST:/usr/share/aws/glue/etl/jars/glue-assembly.jar .
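
With the jar in hand, one way to wire it into the build.sbt from the question is as an unmanaged dependency. A minimal sketch, assuming the copied glue-assembly.jar is placed in the project's lib/ directory (the directory choice is just an illustration), with Spark marked provided because the Glue runtime ships its own Spark:

// build.sbt additions (sketch) -- assumes glue-assembly.jar was copied into ./lib
// lib/ is sbt's default unmanagedBase, so any jar placed there is added to the
// compile, test, and run classpaths automatically.
unmanagedBase := baseDirectory.value / "lib"

// The Glue runtime provides Spark, so keep it out of the packaged job.
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.1" % "provided"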

Answer 1 (score: 7)

Unfortunately, there is no library available for the Scala Glue API. Amazon support has been contacted and they are aware of the problem, but they have not provided any ETA for delivering the API jar.

Answer 2 (score: 3)

As a workaround, you can download the jar from S3. The S3 URI is s3://aws-glue-jes-prod-us-east-1-assets/etl/jars/glue-assembly.jar

See https://docs.aws.amazon.com/glue/latest/dg/dev-endpoint-tutorial-repl.html
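
For example, the copy could look like this (a sketch using the standard AWS CLI; the local ./lib/ target directory is just an illustration):

aws s3 cp s3://aws-glue-jes-prod-us-east-1-assets/etl/jars/glue-assembly.jar ./lib/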

Answer 3 (score: 2)