How can I run an Apache Spark job on AWS ECS Fargate instead of EMR / EC2?

Asked: 2019-05-09 17:34:07

Tags: apache-spark amazon-ecs aws-fargate

AWS Elastic MapReduce has a lot going for it, but it also has some rough edges that I would like to avoid for the fairly cheap computations I want to run in Apache Spark. Specifically, I want to see whether I can run a (Scala) Spark application on AWS ECS / Fargate. It would be even better if I could get away with a single container running in client/local mode.

I first built a Spark distribution with the hadoop-3.1 (for AWS STS support) and kubernetes profiles:

# in apache/spark git repository under tag v2.4.0
./dev/make-distribution.sh --name hadoop3-kubernetes -Phadoop-3.1 -Pkubernetes -T4

Then I built a generic Spark docker image from that distribution:

docker build -t spark:2.4.0-hadoop3.1 -f kubernetes/dockerfiles/spark/Dockerfile .
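
As a quick sanity check (just a minimal example, not one of the original steps), the freshly built image can be asked to print its Spark version:

# optional: verify the base image by printing the Spark version
docker run --rm --entrypoint /opt/spark/bin/spark-submit spark:2.4.0-hadoop3.1 --version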

Then, in my project, I built another docker image on top of it that copies the sbt-assembled uberjar into the working directory and sets the entrypoint to the spark-submit shell script:

# Dockerfile
FROM spark:2.4.0-hadoop3.1
COPY target/scala-2.11/my-spark-assembly.jar .
ENTRYPOINT [ "/opt/spark/bin/spark-submit" ]
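
For reference, the build sequence is roughly the following (the my-spark-app image name is only a placeholder; the assembly jar name comes from my sbt-assembly settings):

# build the uberjar, then the application image on top of the Spark base image
sbt assembly
docker build -t my-spark-app:latest .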

On my local machine I can run the application by supplying the appropriate arguments in the docker-compose command specification:

# docker-compose.yml
...
   command:
     - --master
     - local[*]
     - --deploy-mode
     - client
     - my-spark-assembly.jar
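
On ECS the same arguments go into the command field of the container definition in the task definition. A rough sketch of that fragment (account, region and names are placeholders, and this is not the complete task definition):

"containerDefinitions": [
  {
    "name": "spark-app",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/my-spark-app:latest",
    "command": ["--master", "local[*]", "--deploy-mode", "client", "my-spark-assembly.jar"]
  }
]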

Unfortunately, on ECS Fargate the following stack trace is written to CloudWatch and the task fails shortly afterwards:

Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.spark.SparkConf$.<init>(SparkConf.scala:714)
at org.apache.spark.SparkConf$.<clinit>(SparkConf.scala)
at org.apache.spark.SparkConf$$anonfun$getOption$1.apply(SparkConf.scala:388)
at org.apache.spark.SparkConf$$anonfun$getOption$1.apply(SparkConf.scala:388)
at scala.Option.orElse(Option.scala:289)
at org.apache.spark.SparkConf.getOption(SparkConf.scala:388)
at org.apache.spark.SparkConf.get(SparkConf.scala:250)
at org.apache.spark.deploy.SparkHadoopUtil$.org$apache$spark$deploy$SparkHadoopUtil$$appendS3AndSparkHadoopConfigurations(SparkHadoopUtil.scala:463)
at org.apache.spark.deploy.SparkHadoopUtil$.newConfiguration(SparkHadoopUtil.scala:436)
at org.apache.spark.deploy.SparkSubmit$$anonfun$2.apply(SparkSubmit.scala:334)
at org.apache.spark.deploy.SparkSubmit$$anonfun$2.apply(SparkSubmit.scala:334)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:334)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: c0d66fa49434: c0d66fa49434: Name does not resolve
at java.net.InetAddress.getLocalHost(InetAddress.java:1506)
at org.apache.spark.util.Utils$.findLocalInetAddress(Utils.scala:946)
at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress$lzycompute(Utils.scala:939)
at org.apache.spark.util.Utils$.org$apache$spark$util$Utils$$localIpAddress(Utils.scala:939)
at org.apache.spark.util.Utils$$anonfun$localCanonicalHostName$1.apply(Utils.scala:996)
at org.apache.spark.util.Utils$$anonfun$localCanonicalHostName$1.apply(Utils.scala:996)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.localCanonicalHostName(Utils.scala:996)
at org.apache.spark.internal.config.package$.<init>(package.scala:296)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
... 18 more
Caused by: java.net.UnknownHostException: c0d66fa49434: Name does not resolve
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324)
at java.net.InetAddress.getLocalHost(InetAddress.java:1501)
... 27 more
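
From the trace, the root cause seems to be InetAddress.getLocalHost() failing because the container's hostname (the random container ID) does not resolve inside the Fargate task. One workaround I am considering, although I have not verified it on Fargate, is to bypass that lookup by setting SPARK_LOCAL_IP, which Spark consults before falling back to getLocalHost(). A sketch of the adjusted Dockerfile:

# Dockerfile (untested workaround sketch)
FROM spark:2.4.0-hadoop3.1
# Spark reads SPARK_LOCAL_IP before attempting InetAddress.getLocalHost()
ENV SPARK_LOCAL_IP=127.0.0.1
COPY target/scala-2.11/my-spark-assembly.jar .
ENTRYPOINT [ "/opt/spark/bin/spark-submit" ]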

Has anyone had success with a similar attempt?

0 Answers:

No answers yet.