How to import the elasticsearch-hadoop dependency into a Spark script

Asked: 2016-05-30 15:14:59

Tags: elasticsearch apache-spark cassandra

I am trying to use the spark-es connector by setting my Build.scala file to:
libraryDependencies ++= Seq(
    "com.datastax.spark" %% "spark-cassandra-connector" % "1.2.1",
    "org.elasticsearch" %% "elasticsearch-hadoop" % "2.2.0"
  )

But I get the error:

[error] (*:update) sbt.ResolveException: unresolved dependency: org.elasticsearch#elasticsearch-hadoop_2.10;2.2.0: not found

I can see that it exists here...

EDIT:

When I change Build.scala to:

"org.elasticsearch" % "elasticsearch-hadoop" % "2.2.0"

I get the following error:

[error] impossible to get artifacts when data has not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
java.lang.IllegalStateException: impossible to get artifacts when data has not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3

What is the problem?

1 answer:

Answer 0 (score: 3)

elasticsearch-hadoop is not a Scala dependency, so it does not publish Scala-version-specific artifacts (which is why sbt looked for elasticsearch-hadoop_2.10 and failed), and it cannot be used with %%. Try

"org.elasticsearch" % "elasticsearch-hadoop" % "2.2.0"
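
Putting it together, the dependency list from the question would then look like this (a minimal sketch; the spark-cassandra-connector line is kept from the question and assumed unchanged):

```scala
libraryDependencies ++= Seq(
  // spark-cassandra-connector is published per Scala version
  // (e.g. spark-cassandra-connector_2.10), so %% is correct here:
  // it appends the project's Scala binary version to the artifact name.
  "com.datastax.spark" %% "spark-cassandra-connector" % "1.2.1",
  // elasticsearch-hadoop is a plain Java artifact with no Scala suffix,
  // so use a single % to resolve the artifact name as-is.
  "org.elasticsearch" % "elasticsearch-hadoop" % "2.2.0"
)
```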