Using HiveContext in Spark SQL throws an exception

Date: 2017-03-07 12:30:12

Tags: apache-spark-sql hivecontext spark-hive

I have to use HiveContext instead of SQLContext because I use some window functions that are only available through HiveContext. I have added the following dependency to my pom.xml:

<dependency>
   <groupId>org.apache.spark</groupId>
   <artifactId>spark-hive_2.10</artifactId>
   <version>1.6.0</version>
</dependency>
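
For context, a minimal sketch of the kind of window-function query that requires HiveContext in Spark 1.6 (the input path, table name, and columns are hypothetical):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class WindowFunctionSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WindowFunctionSketch");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        // In Spark 1.6 window functions need HiveContext;
        // plain SQLContext rejects the OVER clause.
        HiveContext hiveContext = new HiveContext(jsc.sc());

        // Hypothetical input: any DataFrame registered as a temp table.
        DataFrame events = hiveContext.read().json("/path/to/events.json");
        events.registerTempTable("events");

        DataFrame ranked = hiveContext.sql(
            "SELECT id, ts, "
          + "ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts DESC) AS rn "
          + "FROM events");
        ranked.show();
    }
}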

The Spark version on the machine where I run the code is also 1.6.0. However, when I submit my code to Spark, I get the following exception:

Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Iface.get_all_functions()Lorg/apache/hadoop/hive/metastore/api/GetAllFunctionsResponse;

Here is the stack trace:

    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllFunctions(HiveMetaStoreClient.java:2060)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:105)
    at com.sun.proxy.$Proxy27.getAllFunctions(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:1998)
    at com.sun.proxy.$Proxy27.getAllFunctions(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3179)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:210)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:197)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:307)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:268)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:243)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:512)
    at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:194)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:249)
    at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:329)
    at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:239)
    at org.apache.spark.sql.hive.HiveContext$$anon$2.<init>(HiveContext.scala:459)
    at org.apache.spark.sql.hive.HiveContext.catalog$lzycompute(HiveContext.scala:459)
    at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:458)
    at org.apache.spark.sql.hive.HiveContext$$anon$3.<init>(HiveContext.scala:475)
    at org.apache.spark.sql.hive.HiveContext.analyzer$lzycompute(HiveContext.scala:475)
    at org.apache.spark.sql.hive.HiveContext.analyzer(HiveContext.scala:474)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
    at org.apache.spark.sql.SQLContext.baseRelationToDataFrame(SQLContext.scala:442)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:146)
    at com.cloudera.sparkwordcount.FindServers.main(FindServers.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
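
A NoSuchMethodError at this layer usually means the hive-metastore jar on the runtime classpath does not match the version Spark's Hive client was compiled against. A small diagnostic sketch (assuming only the class name from the error above) that prints which jar the failing interface is actually loaded from:

public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        // Resolve the Thrift metastore interface named in the NoSuchMethodError
        // and print the jar it was loaded from; a stale or mismatched
        // hive-metastore jar here is the usual culprit.
        Class<?> iface = Class.forName(
                "org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Iface");
        System.out.println(iface.getProtectionDomain().getCodeSource().getLocation());
    }
}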

Does anyone have any ideas?

1 answer:

Answer 0 (score: 0)

I solved a similar problem by replacing the jar and explicitly passing the hive-metastore.jar in my spark-submit command:

--conf /path_to/hive-metastore.jar

The jar I had been using before was not built for this environment, which is what caused the problem.

The working jar is located in the Cloudera parcels directory (the exact path depends on the distribution), for example:
/cloudera/opt/cloudera/parcels/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957/lib/spark2/jars/hive-metastore-1.1.0-cdh5.12.0.jar
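
For reference, a hedged sketch of what a full spark-submit invocation might look like when pinning that jar (the application jar name is a placeholder; the main class is taken from the stack trace above). --jars ships the jar with the job, and the extraClassPath settings put it on the driver and executor classpaths:

spark-submit \
  --class com.cloudera.sparkwordcount.FindServers \
  --jars /cloudera/opt/cloudera/parcels/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957/lib/spark2/jars/hive-metastore-1.1.0-cdh5.12.0.jar \
  --conf spark.driver.extraClassPath=/cloudera/opt/cloudera/parcels/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957/lib/spark2/jars/hive-metastore-1.1.0-cdh5.12.0.jar \
  --conf spark.executor.extraClassPath=/cloudera/opt/cloudera/parcels/SPARK2-2.2.0.cloudera2-1.cdh5.12.0.p0.232957/lib/spark2/jars/hive-metastore-1.1.0-cdh5.12.0.jar \
  my-app.jar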

Cheers, Michał