assemblyMergeStrategy causes scala.MatchError at compile time

Asked: 2016-04-08 21:13:35

Tags: scala sbt sbt-assembly

I'm new to sbt/assembly. I'm trying to resolve some dependency conflicts, and it seems the only way to do that is with a custom merge strategy. However, whenever I add a merge strategy, I get a seemingly random MatchError at compile time:

[error] (*:assembly) scala.MatchError: org/apache/spark/streaming/kafka/KafkaUtilsPythonHelper$$anonfun$13.class (of class java.lang.String)

The MatchError points at the kafka library, but if I remove that library entirely, I get a MatchError on another library. If I remove all of the libraries, I get a MatchError on my own code. None of this happens if I take out the assemblyMergeStrategy block. I'm obviously missing something very basic, but for the life of me I can't find it, and I can't find anyone else who has had this problem. I've tried the older mergeStrategy syntax, but as far as I can tell from the docs and from SO, this is now the correct way to write it. Please help?

Here is my project/assembly.sbt:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

And my project.sbt file:

name := "Clerk"

version := "1.0"

scalaVersion := "2.11.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.6.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.6.1" % "provided",
  "org.apache.kafka" %% "kafka" % "0.8.2.1",
  "ch.qos.logback" %  "logback-classic" % "1.1.7",
  "net.logstash.logback" % "logstash-logback-encoder" % "4.6",
  "com.typesafe.scala-logging" %% "scala-logging" % "3.1.0",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.6.1",
  ("org.apache.spark" %% "spark-streaming-kafka" % "1.6.1").
    exclude("org.spark-project.spark", "unused")
)

assemblyMergeStrategy in assembly := {
  case PathList("org.slf4j", "impl", xs @ _*) => MergeStrategy.first
}

assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)

1 Answer:

Answer 0 (score: 4)

You are missing the default case in your merge-strategy pattern match:

assemblyMergeStrategy in assembly := {
  case PathList("org.slf4j", "impl", xs @ _*) => MergeStrategy.first
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
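
For context: the value of assemblyMergeStrategy in assembly is a function from every path inside the jars being merged to a MergeStrategy. With only a single case, any path that doesn't match it (such as that Kafka class file) fails the pattern match and throws scala.MatchError, which is why the error seems to move around as you add or remove libraries. A minimal sketch of a slightly fuller strategy, assuming you also want to discard duplicate META-INF metadata (a common choice for Spark fat jars, not something your build requires):

assemblyMergeStrategy in assembly := {
  // your special cases first
  case PathList("org.slf4j", "impl", xs @ _*) => MergeStrategy.first
  // example only: drop duplicate META-INF entries (adjust to your needs)
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  // everything else falls back to sbt-assembly's default behaviour
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}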