I am trying to run a query (a simple SELECT) against a Hive table on the cluster through the Shark Java API.
However, I get this error message:
14/01/15 17:25:54 INFO cluster.ClusterTaskSetManager: Loss was due to java.lang.NoClassDefFoundError
java.lang.NoClassDefFoundError: Could not initialize class com.google.common.cache.CacheBuilder
at org.apache.hadoop.hdfs.DomainSocketFactory.<init>(DomainSocketFactory.java:46)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:456)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:105)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:93)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:83)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:29)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
at ....
Followed by this error:
14/01/15 17:25:54 INFO cluster.ClusterTaskSetManager: Loss was due to java.lang.IncompatibleClassChangeError
java.lang.IncompatibleClassChangeError: class com.google.common.cache.CacheBuilder$3 has interface com.google.common.base.Ticker as super class
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at com.google.common.cache.CacheBuilder.<clinit>(CacheBuilder.java:207)
at org.apache.hadoop.hdfs.DomainSocketFactory.<init>(DomainSocketFactory.java:46)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:456)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:105)
at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:93)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:83)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:51)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:237)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:226)
It looks like a problem with the Guava dependency, but I can't figure out what exactly is wrong.
I am using Spark 0.8.0, Shark 0.8.0, Hive 0.9.0, and Hadoop 2.0.0-cdh4.5.0.
The only dependencies in my pom.xml that pull in Guava are:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.9.3</artifactId>
    <version>0.8.0-incubating</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.0.0-cdh4.5.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>0.9.0</version>
</dependency>
Does anyone know how to fix this?
Thanks.
Answer 0 (score: 2)
All three of the dependencies you list pull in different versions of Guava.
It seems Hadoop is looking for Guava's CacheBuilder, which was added in Guava 10.0, but the version coming from Hive (r09) must be the one taking precedence on the classpath.
My suggestion is to use Maven's dependency exclusions to stop Maven from importing Guava via Hive. You probably also want to exclude it from Hadoop, so you can be sure the newest of the three versions (14.0) is the one actually used. A sketch of what that could look like is given below.
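A rough sketch of such exclusions applied to the pom.xml above (the exclusions are the essential part; the explicit Guava pin at the end is an assumption, added only to make the selected version unambiguous, and 14.0.1 is used as an example of a CacheBuilder-era release — adjust it to whatever spark-core actually requires):

<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-exec</artifactId>
    <version>0.9.0</version>
    <exclusions>
        <!-- keep Hive's old Guava (r09) off the classpath -->
        <exclusion>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.0.0-cdh4.5.0</version>
    <exclusions>
        <!-- exclude Hadoop's transitive Guava as well, so only one version remains -->
        <exclusion>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- assumption: pin Guava explicitly so the version on the classpath is explicit -->
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>14.0.1</version>
</dependency>

You can check which Guava versions each dependency drags in, and which one Maven ultimately selects, with mvn dependency:tree -Dincludes=com.google.guava.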