Best way to get custom JARs onto the Spark worker classpath

Asked: 2017-05-01 18:47:41

Tags: java apache-spark dependencies etl sbt-assembly

I'm working on an ETL pipeline in Spark, and I'm finding that pushing out new versions is time- and bandwidth-intensive. My release script (pseudocode):

sbt assembly
openstack object create spark target/scala-2.11/etl-$VERSION-super.jar
spark-submit \
    --class comapplications.WindowsETLElastic \
    --master spark://spark-submit.cloud \
    --deploy-mode cluster \
    --verbose \
    --conf "spark.executor.memory=16g" \
    "$JAR_URL"

This works, but it can take 4 minutes to assemble the jar and another minute to upload it. My build.sbt:

name := "secmon_etl"

version := "1.2"

scalaVersion := "2.11.8"

exportJars := true

assemblyJarName in assembly := s"${name.value}-${version.value}-super.jar"

libraryDependencies ++= Seq (
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0",
  "io.spray" %%  "spray-json" % "1.3.3",
//  "commons-net" % "commons-net" % "3.5",
//  "org.apache.httpcomponents" % "httpclient" % "4.5.2",
  "org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.3.1"
)

assemblyMergeStrategy in assembly <<= (assemblyMergeStrategy in assembly) {
  (old) => {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
  }
}
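For context, the assembly keys above come from the sbt-assembly plugin, whose project/plugins.sbt entry isn't shown in the post; for an sbt 0.13-era build like this one it would presumably look something like the following (the exact plugin version is an assumption, not taken from the original):

// project/plugins.sbt -- sbt-assembly provides the assembly / assemblyPackageDependency tasks
// (0.14.5 is an assumed, era-appropriate version, not confirmed by the original post)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")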

The problem seems to be the sheer size of elasticsearch-spark-20_2.11: it adds roughly 90MB to my uberjar. I'd happily make it a provided dependency, present on the Spark hosts, so it wouldn't need to be packaged at all. The question is, what's the best way to do that? Should I just copy the jars over by hand, or is there an easy way to declare the dependency and have a tool resolve all of its transitive dependencies?

1 answer:

Answer 0 (score: 0)

I'm running much faster Spark job deployments now. I ran

sbt assemblyPackageDependency

which generated a huge jar (110MB!) that can easily be placed in the Spark installation's 'jars' directory, so the Dockerfile for one of my Spark clusters now looks like this:

FROM openjdk:8-jre

ENV SPARK_VERSION 2.1.0
ENV HADOOP_VERSION hadoop2.7
ENV SPARK_MASTER_OPTS="-Djava.net.preferIPv4Stack=true"

RUN apt-get update && apt-get install -y python

RUN curl -sSLO http://mirrors.ocf.berkeley.edu/apache/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-$HADOOP_VERSION.tgz && tar xzfC /spark-$SPARK_VERSION-bin-$HADOOP_VERSION.tgz /usr/share && rm /spark-$SPARK_VERSION-bin-$HADOOP_VERSION.tgz

# master's or worker's web UI port
EXPOSE 8080
# master's port for spark:// submissions
EXPOSE 7077

ADD deps.jar /usr/share/spark-$SPARK_VERSION-bin-$HADOOP_VERSION/jars/

WORKDIR /usr/share/spark-$SPARK_VERSION-bin-$HADOOP_VERSION
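The ADD step above expects the dependency jar to be named deps.jar, while assemblyPackageDependency by default writes it under target/scala-2.11 with a name along the lines of <name>-<version>-deps.jar. A small build.sbt addition that pins the name so the Docker build can pick the file up directly would be (this is an assumed convenience, not something shown in the original answer):

// Assumed addition: give the dependency-only jar a fixed name ("deps.jar")
// so the Dockerfile's "ADD deps.jar ..." line can reference it without a rename step.
assemblyJarName in assemblyPackageDependency := "deps.jar"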

Once that configuration was deployed, I changed my build.sbt so that the kafka-streaming / elasticsearch-spark jars and their dependencies are marked as provided:

name := "secmon_etl"

version := "1.2"

scalaVersion := "2.11.8"

exportJars := true

assemblyJarName in assembly := s"${name.value}-${version.value}-super.jar"

libraryDependencies ++= Seq (
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "2.1.0" % "provided",

  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.1.0" % "provided",
  "io.spray" %%  "spray-json" % "1.3.3" % "provided",
  "org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.3.1" % "provided"
)

assemblyMergeStrategy in assembly <<= (assemblyMergeStrategy in assembly) {
  (old) => {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
  }
}
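One side effect of marking everything provided is that sbt run no longer puts those jars on the local classpath. If local runs matter, the usual workaround is to re-add the provided dependencies to the run task's classpath; this is a standard sbt recipe (a hedged sketch, not part of the original answer):

// Optional: let "sbt run" see provided dependencies locally.
// Standard recipe for provided Spark deps; not used in the original post.
run in Compile := Defaults.runTask(
  fullClasspath in Compile,
  mainClass in (Compile, run),
  runner in (Compile, run)
).evaluated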

Now my deploys finish in under 20 seconds!