Spark Hive reports java.lang.NoSuchMethodError: org.apache.hadoop.hive.metastore.api.Table.setTableName(Ljava/lang/String;)V

Date: 2017-08-04 08:36:45

Tags: apache-spark hive

I am trying to read data from Hive using SparkSession.

My code:

    val warehouseLocation = "/user/xx/warehouse"

    val spark = SparkSession
      .builder()
      .master("local[*]")
      .appName("HiveReceiver")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport()
      .getOrCreate()

    import spark.sql

    sql("select * from sparktest.test").show()

    spark.stop()

My versions:

    Spark: 2.1.1
    Hive: 1.2.1
    Hadoop: 2.7.1
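For reference, a Maven dependency set matching these versions might look like the following. This is only a sketch under the assumption of a Maven build with Scala 2.11 artifacts; the question does not show the actual build file:

```xml
<!-- Assumed coordinates for Spark 2.1.1 with Hive support (Scala 2.11 build) -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.11</artifactId>
  <version>2.1.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-hive_2.11</artifactId>
  <version>2.1.1</version>
</dependency>
```

Note that `spark-hive` transitively brings in its own Hive metastore classes, which is relevant to the error below.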

But when running in IDEA, I get this exception:

    Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hive.metastore.api.Table.setTableName(Ljava/lang/String;)V
        at org.apache.spark.sql.hive.MetastoreRelation.<init>(MetastoreRelation.scala:76)
        at org.apache.spark.sql.hive.HiveMetastoreCatalog.lookupRelation(HiveMetastoreCatalog.scala:142)
        at org.apache.spark.sql.hive.HiveSessionCatalog.lookupRelation(HiveSessionCatalog.scala:70)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:457)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:479)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:464)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$resolveOperators$1.apply(LogicalPlan.scala:61)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:60)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan$$anonfun$1.apply(LogicalPlan.scala:58)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:307)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188)
        at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:305)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:58)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:464)
        at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.apply(Analyzer.scala:454)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:85)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:82)
        at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:124)
        at scala.collection.immutable.List.foldLeft(List.scala:84)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:82)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:74)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:74)
        at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:69)
        at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:67)
        at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:50)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
        at com.bdp.steaming.HiveReceiver$.main(HiveReceiver.scala:24)
        at com.bdp.steaming.HiveReceiver.main(HiveReceiver.scala)

Does anyone know where this error comes from?

1 Answer:

Answer 0 (score: 1)

I have solved this problem. In my case, my project had two hive-metastore dependencies on the classpath, so I excluded one of them, and it worked.
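The answer does not show the build file, but assuming a Maven project, the duplicate can typically be located with `mvn dependency:tree` (looking for two different `hive-metastore` versions) and then removed with an exclusion on the dependency that drags it in. A minimal sketch, with hypothetical placeholder coordinates for that dependency:

```xml
<!-- Hypothetical example: `some.group:some-artifact` stands for whichever
     dependency transitively pulls in the conflicting hive-metastore copy;
     identify it first with `mvn dependency:tree`. -->
<dependency>
  <groupId>some.group</groupId>
  <artifactId>some-artifact</artifactId>
  <version>x.y.z</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.hive</groupId>
      <artifactId>hive-metastore</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

With only one metastore copy left, Spark links against a single `org.apache.hadoop.hive.metastore.api.Table` class and the `NoSuchMethodError` from mismatched class versions goes away.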