Trying to compile gensort.scala, getting: [error] impossible to get artifacts when data has not been loaded. IvyNode = net.java.dev.jets3t#jets3t;0.6.1

Date: 2015-04-08 21:15:24

Tags: scala sbt

I am new to Scala and sbt and not sure how to proceed. Am I missing more dependencies?

Steps to reproduce:

  1. Save the gensort.scala code in ~/spark-1.3.0/project/
  2. Start the build: my-server$ ~/spark-1.3.0/project/sbt
  3. > run
  4. gensort.scala: gensort source

    Build definition file in ~/spark-1.3.0/project/build.sbt:

    lazy val root = (project in file(".")).
      settings(
        name := "gensort",
        version := "1.0",
        scalaVersion := "2.11.6"
      )

    libraryDependencies ++= Seq(
      "org.apache.spark" % "spark-examples_2.10" % "1.1.1",
      "org.apache.spark" % "spark-core_2.11" % "1.3.0",
      "org.apache.spark" % "spark-streaming-mqtt_2.11" % "1.3.0",
      "org.apache.spark" % "spark-streaming_2.11" % "1.3.0",
      "org.apache.spark" % "spark-network-common_2.10" % "1.2.0",
      "org.apache.spark" % "spark-network-shuffle_2.10" % "1.3.0",
      "org.apache.hadoop" % "hadoop-core" % "1.2.1"
    )
    

    Any insight on how to move forward is greatly appreciated. Thanks! Dennis

1 Answer:

Answer 0 (score: 0)

You should not mix 2.10 and 2.11; they are not binary compatible. Your libraryDependencies should look like this:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-examples" % "1.1.1",
  "org.apache.spark" %% "spark-core" % "1.3.0",
  "org.apache.spark" %% "spark-streaming-mqtt" % "1.3.0",
  "org.apache.spark" %% "spark-streaming" % "1.3.0",
  "org.apache.spark" %% "spark-network-common" % "1.2.0",
  "org.apache.spark" %% "spark-network-shuffle" % "1.3.0",
  "org.apache.hadoop" % "hadoop-core" % "1.2.1"
)

%% means that the Scala version is appended as a suffix to the artifact id. After this change I got an error because one dependency could not be found; it is hosted in this repository:

resolvers += "poho" at "https://repo.eclipse.org/content/repositories/paho-releases"

However, spark-examples does not seem to be available for 2.11. Changing scalaVersion to

scalaVersion := "2.10.5"

resolved all dependency issues, and the compile succeeded.
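
Putting the pieces together, the full corrected build.sbt would look roughly like this (a sketch assembled from the fragments above, not a build verified here):

lazy val root = (project in file(".")).
  settings(
    name := "gensort",
    version := "1.0",
    scalaVersion := "2.10.5"
  )

resolvers += "poho" at "https://repo.eclipse.org/content/repositories/paho-releases"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-examples" % "1.1.1",
  "org.apache.spark" %% "spark-core" % "1.3.0",
  "org.apache.spark" %% "spark-streaming-mqtt" % "1.3.0",
  "org.apache.spark" %% "spark-streaming" % "1.3.0",
  "org.apache.spark" %% "spark-network-common" % "1.2.0",
  "org.apache.spark" %% "spark-network-shuffle" % "1.3.0",
  "org.apache.hadoop" % "hadoop-core" % "1.2.1"
)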