java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror

Date: 2017-04-20 07:25:54

Tags: scala apache-spark elasticsearch-hadoop

java.lang.NoSuchMethodError: scala.reflect.api.JavaUniverse.runtimeMirror(Ljava/lang/ClassLoader;)Lscala/reflect/api/JavaMirrors$JavaMirror;
    at org.elasticsearch.spark.serialization.ReflectionUtils$.org$elasticsearch$spark$serialization$ReflectionUtils$$checkCaseClass(ReflectionUtils.scala:42)
    at org.elasticsearch.spark.serialization.ReflectionUtils$$anonfun$checkCaseClassCache$1.apply(ReflectionUtils.scala:84)

It looks like a Scala version incompatibility, but according to the Spark documentation, Spark 2.1.0 with Scala 2.11.8 should be fine.

Here is my pom.xml. This is just a test of writing to Elasticsearch from Spark using es-hadoop, and I don't know how to resolve this exception.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>cn.jhTian</groupId>
    <artifactId>sparkLink</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>${project.artifactId}</name>
    <description>My wonderful scala app</description>
    <inceptionYear>2015</inceptionYear>
    <licenses>
        <license>
            <name>My License</name>
            <url>http://....</url>
            <distribution>repo</distribution>
        </license>
    </licenses>

    <properties>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.8</scala.version>
        <scala.compat.version>2.11</scala.compat.version>

    </properties>

    <repositories>
        <repository>
            <id>ainemo</id>
            <name>xylink</name>
            <url>http://10.170.209.180:8081/nexus/content/groups/public/</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.4</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <!--<dependency>-->
            <!--<groupId>org.scala-lang</groupId>-->
            <!--<artifactId>scala-compiler</artifactId>-->
            <!--<version>${scala.version}</version>-->
        <!--</dependency>-->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
            <version>2.1.0</version>
        </dependency>
        <dependency>
            <groupId>com.google.protobuf</groupId>
            <artifactId>protobuf-java</artifactId>
            <version>3.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch-hadoop</artifactId>
            <version>5.3.0</version>
        </dependency>

        <!-- Test -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.specs2</groupId>
            <artifactId>specs2-core_${scala.compat.version}</artifactId>
            <version>2.4.16</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.scalatest</groupId>
            <artifactId>scalatest_${scala.compat.version}</artifactId>
            <version>2.2.4</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Here is my code:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

/**
  * Created by jhTian on 2017/4/19.
  */
object EsWrite {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf()
      .set("es.nodes", "1.1.1.1")
      .set("es.port", "9200")
      .set("es.index.auto.create", "true")
      .setAppName("es-spark-demo")
    val sc = new SparkContext(sparkConf)
    val job1 = Job("C开发工程师","http://job.c.com","c公司","10000")
    val job2 = Job("C++开发工程师","http://job.c++.com","c++公司","10000")
    val job3 = Job("C#开发工程师","http://job.c#.com","c#公司","10000")
    val job4 = Job("Java开发工程师","http://job.java.com","java公司","10000")
    val job5 = Job("Scala开发工程师","http://job.scala.com","java公司","10000")
//    val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
//    val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")
//    val rdd=sc.makeRDD(Seq(numbers,airports))
    val rdd=sc.makeRDD(Seq(job1,job2,job3,job4,job5))
    rdd.saveToEs("job/info")
    sc.stop()
  }

}
case class Job(jobName: String, jobUrl: String, companyName: String, salary: String)

2 Answers:

Answer 0 (score: 1)

Generally, a NoSuchMethodError means that the calling code was compiled against a different version of a class than the one found on the classpath at runtime (or that there are multiple versions on the classpath).

In your case, I'd guess that es-hadoop was built against a different version of Scala. I haven't used Maven in a while, but a useful command to start with is mvn dependency:tree. Use its output to see which Scala version es-hadoop pulls in, then configure your project to use that same Scala version.
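For example, a minimal invocation (the -Dincludes filter is a standard option of the Maven dependency plugin; org.scala-lang here simply narrows the output to Scala artifacts):

    # Print the project's full dependency tree
    mvn dependency:tree

    # Narrow the output to Scala artifacts only
    mvn dependency:tree -Dincludes=org.scala-lang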

To get stable/reproducible builds, I'd suggest using something like the maven-enforcer-plugin:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.4.1</version>
    <executions>
        <execution>
            <id>enforce</id>
            <configuration>
                <rules>
                    <dependencyConvergence />
                </rules>
            </configuration>
            <goals>
                <goal>enforce</goal>
            </goals>
        </execution>
    </executions>
</plugin>

It can be annoying at first, but once you have sorted out all your dependencies you shouldn't run into problems like this again.
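If the convergence rule then flags a conflicting transitive Scala artifact, the usual fix is an exclusion on the offending dependency. A sketch only: whether es-hadoop actually pulls in scala-library transitively is an assumption here, so exclude whatever artifact the tree actually reports:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-hadoop</artifactId>
    <version>5.3.0</version>
    <!-- Hypothetical exclusion: drop a conflicting transitive Scala artifact
         reported by mvn dependency:tree -->
    <exclusions>
        <exclusion>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
        </exclusion>
    </exclusions>
</dependency>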

Answer 1 (score: 0)

Use the dependency like this:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-20_2.11</artifactId>
    <version>5.2.2</version>
</dependency>

This artifact targets Spark 2.0+ and Scala 2.11.
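If you switch to elasticsearch-spark-20_2.11, also remove the elasticsearch-hadoop dependency from the pom: that uber jar bundles its own Spark integration, which appears to have been compiled against Scala 2.10 (the JavaMirrors$JavaMirror return type in the stack trace is the Scala 2.10 signature of runtimeMirror), so leaving both on the classpath keeps the conflict alive.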