'Hive on Spark' - Caused by: java.lang.ClassNotFoundException: org.apache.hive.spark.counter.SparkCounters

Date: 2018-04-25 13:07:24

Tags: java scala apache-spark hadoop hive

I am running Hive on Spark: Hive v2.3.3 with Spark v2.0.0 in Spark standalone mode, without YARN. My Hive tables are external, pointing to S3. My hive-site.xml sets spark.submit.deployMode to client and spark.master to spark://actualmaster:7077, and in the Spark UI I can see that the Spark master has available resources.
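For reference, the relevant hive-site.xml entries would look roughly like the sketch below; the hostname is the placeholder from above, and the hive.execution.engine property is assumed (it is the standard switch for Hive on Spark, not quoted in the original post):

    <!-- hive-site.xml: Hive-on-Spark settings described above -->
    <property>
      <name>hive.execution.engine</name>
      <value>spark</value>
    </property>
    <property>
      <name>spark.master</name>
      <value>spark://actualmaster:7077</value>
    </property>
    <property>
      <name>spark.submit.deployMode</name>
      <value>client</value>
    </property>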

From beeline, "select * from table;" works. Then, also from beeline, I run "select count(*) from table;" and get the following error:

The supposedly missing class is in fact present in /usr/lib/apache-hive-2.3.3-bin/lib/hive-exec-2.3.3.jar. HiveServer2 was started with:

    nohup $HIVE_HOME/bin/hive --service hiveserver2 --hiveconf hive.server2.thrift.port=10000 --hiveconf hive.root.logger=INFO,console &>> $HIVE_HOME/logs/hiveserver2.log &

The following error is taken from the failed job in the Spark UI:

Failed stage 0: mapPartitionsToPair at MapTran.java:40

java.lang.NoClassDefFoundError: Lorg/apache/hive/spark/counter/SparkCounters;
    at java.lang.Class.getDeclaredFields0(Native Method)
    at java.lang.Class.privateGetDeclaredFields(Class.java:2583)
    at java.lang.Class.getDeclaredField(Class.java:2068)
    at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1803)
    at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:79)
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:494)
    at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:482)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:482)
    at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:379)
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:669)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1875)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1744)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2032)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2277)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2201)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2059)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2277)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2201)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2059)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2277)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2201)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2059)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2277)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2201)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2059)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2277)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2201)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2059)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:426)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:71)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
    at org.apache.spark.scheduler.Task.run(Task.scala:85)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.hive.spark.counter.SparkCounters
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 45 more

Note that if I first run "set spark.master=local;" in beeline, then count(*) works fine. What am I missing to make it work without setting spark.master to local?
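For clarity, a minimal beeline session showing the workaround; the JDBC URL and table name are placeholders, not from the original post:

    $ beeline -u jdbc:hive2://localhost:10000
    0: jdbc:hive2://localhost:10000> set spark.master=local;
    0: jdbc:hive2://localhost:10000> select count(*) from mytable;  -- succeeds with a local master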

1 Answer:

Answer 0 (score: 0):

Try passing the Hive jar path in the spark-submit command and see whether that helps, because the same thing happened to me. If that works, then check your Spark conf files; you are probably not pointing to the Hive jars correctly.
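A sketch of what this answer suggests, assuming the standard standalone layout from the question; the paths and the use of spark.executor.extraClassPath are the usual Spark mechanisms for this, not something the answerer spelled out:

    # Option 1: make hive-exec visible to every executor by copying
    # (or symlinking) it into Spark's jars directory on each worker node
    cp /usr/lib/apache-hive-2.3.3-bin/lib/hive-exec-2.3.3.jar $SPARK_HOME/jars/

    # Option 2: put the hive-exec jar on the executor classpath via
    # spark-defaults.conf (path assumed from the question)
    echo 'spark.executor.extraClassPath /usr/lib/apache-hive-2.3.3-bin/lib/hive-exec-2.3.3.jar' >> $SPARK_HOME/conf/spark-defaults.conf

Either way, the point is that in standalone mode the executors do not inherit HiveServer2's classpath, which is why the query only works with spark.master=local (where driver and executor share one JVM).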