The Elasticsearch server, version 5.4.1, runs on a Linux machine.
The Spark cluster in use is spark-2.2.0-bin-hadoop2.7.
I added spark.jars.packages org.elasticsearch:elasticsearch-spark-20_2.11:5.4.1 to spark-defaults.conf.
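For clarity, the entry in the defaults file looks like this (just the line above restated in the file's key/value format; the path assumes the standard spark-2.2.0-bin-hadoop2.7 layout):

# conf/spark-defaults.conf
spark.jars.packages  org.elasticsearch:elasticsearch-spark-20_2.11:5.4.1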
Starting the master and the worker succeeds, and the Spark web UI is reachable at localhost:8080. They are started with
./start-master.sh
./start-slave.sh spark://ApacheFlink:7077
I use IntelliJ IDEA and sbt. The Scala version is 2.11.8.
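For reference, the build.sbt skeleton shared by all of the attempts further down is roughly this (a sketch; the project name and version are assumptions, not taken from my real project):

// build.sbt - minimal skeleton; only libraryDependencies changes between attempts
name := "TestInput"
version := "0.1"
scalaVersion := "2.11.8"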
This is the Scala code:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.elasticsearch.spark._

object TestInput {
  def main(args: Array[String]): Unit = {
    println("Hello, world")

    val conf = new SparkConf().setAppName("TestInput").setMaster("spark://ApacheFlink:7077")
    conf.set("es.nodes", "elasticserver")
    conf.set("es.port", "9200")
    conf.set("es.index.auto.create", "true")

    val sc = new SparkContext(conf)
    val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
    val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")

    sc.makeRDD(Seq(numbers, airports)).saveToEs("test/TestInput")
  }
}
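For completeness, this is roughly how the packaged job would be submitted to the standalone master (a sketch; the jar path is an assumption based on a default sbt layout):

# sketch: submit the packaged application to the standalone cluster
# note: spark-defaults.conf (including spark.jars.packages) is, as far as I know,
# only read by the spark-submit/spark-shell launchers, not by a main() started from the IDE
./bin/spark-submit \
  --master spark://ApacheFlink:7077 \
  --class TestInput \
  --packages org.elasticsearch:elasticsearch-spark-20_2.11:5.4.1 \
  target/scala-2.11/testinput_2.11-0.1.jar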
I have experimented quite a bit with the sbt dependencies. These are my findings so far. All of my attempts use scalaVersion := "2.11.8".
First attempt:
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0" % "provided"
[error] /home/foo/Desktop/AnotherOne/src/main/scala/TestInput.scala:4: object elasticsearch is not a member of package org
[error] import org.elasticsearch.spark._
[error] ^
[error] /home/foo/Desktop/AnotherOne/src/main/scala/TestInput.scala:20: value saveToEs is not a member of org.apache.spark.rdd.RDD[scala.collection.immutable.Map[String,Any]]
[error] sc.makeRDD(Seq(numbers, airports)).saveToEs("test/TestInput")
[error] ^
[error] two errors found
[error] (compile:compileIncremental) Compilation failed
Second attempt:
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0" % "provided"
libraryDependencies += "org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.4.1"
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:73)
at org.apache.spark.SparkConf.<init>(SparkConf.scala:68)
at org.apache.spark.SparkConf.<init>(SparkConf.scala:55)
at TestInput$.main(TestInput.scala:11)
at TestInput.main(TestInput.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 5 more
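A note on this failure: with spark-core marked % "provided", the Spark and Hadoop jars are on the compile classpath but not on the runtime classpath when main() is launched straight from IntelliJ or sbt run, which is exactly what a missing org.apache.hadoop.fs.FSDataInputStream looks like. One commonly used sbt workaround (a sketch, not verified against this exact project) is to make the run task use the compile classpath:

// build.sbt - sketch: let "sbt run" see "provided" dependencies such as spark-core
run in Compile := Defaults.runTask(
  fullClasspath in Compile,
  mainClass in (Compile, run),
  runner in (Compile, run)
).evaluated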
Third attempt:
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0"
libraryDependencies += "org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.4.1"
17/08/09 13:43:09 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.1.111, executor 0): java.lang.ClassNotFoundException: org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
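A note on this failure: here the driver starts, but the executor on 192.168.1.111 never receives the elasticsearch-spark classes, so deserializing the saveToEs closure fails. One common way to ship them (a sketch, assuming sbt 0.13 with sbt-assembly; the plugin version is an assumption) is a fat jar that bundles the connector but leaves Spark itself to the cluster:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

// build.sbt
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0" % "provided"
libraryDependencies += "org.elasticsearch" %% "elasticsearch-spark-20" % "5.4.1"

// then: sbt assembly, and spark-submit the assembled jar from target/scala-2.11/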
I even tried pulling in elasticsearch-hadoop, which only produced different error messages. My question now is very simple: what am I doing wrong? I am out of ideas. Is something missing from my Spark cluster?
Answer 0 (score 0):
You probably have a conflict between the Spark jars pulled in as transitive dependencies of the elasticsearch-spark module and the ones declared in your build.sbt.
I got it working this way in my build.sbt:
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "2.1.1" % "compile",
"org.apache.spark" %% "spark-sql" % "2.1.1" % "compile",
"org.apache.spark" %% "spark-mllib" % "2.1.1" % "compile",
"org.elasticsearch" %% "elasticsearch-spark-20" % "5.0.2" excludeAll ExclusionRule(organization = "org.apache.spark")
)
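For anyone adapting this to the versions from the question, a sketch of the same exclusion pattern with Spark 2.2.0 and connector 5.4.1 (untested; the version numbers and scope choice are assumptions):

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  // use "provided" instead of "compile" if the jar is submitted to the cluster with spark-submit
  "org.apache.spark" %% "spark-core" % "2.2.0" % "compile",
  "org.elasticsearch" %% "elasticsearch-spark-20" % "5.4.1"
    excludeAll ExclusionRule(organization = "org.apache.spark")
)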