Failed to import SapSQLContext [val vc = new SapSQLContext(sc)]

Date: 2016-03-11 04:06:17

Tags: vora

I installed the latest version of Vora and its extensions on a Hortonworks Hadoop cluster. When I try to create the SAP SQL context, I get a java.lang.NoSuchMethodError.

The console output is attached below:

scala> import org.apache.spark.sql.SapSQLContext

import org.apache.spark.sql.SapSQLContext

scala> val vc = new SapSQLContext(sc)

java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry.<init>(Lorg/apache/spark/sql/catalyst/CatalystConf;)V
        at org.apache.spark.sql.extension.ExtendableSQLContext.functionRegistry$lzycompute(ExtendableSQLContext.scala:31)
        at org.apache.spark.sql.extension.ExtendableSQLContext.functionRegistry(ExtendableSQLContext.scala:30)
        at org.apache.spark.sql.extension.ExtendableSQLContext.functionRegistry(ExtendableSQLContext.scala:18)
        at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)
        at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:296)
        at org.apache.spark.sql.extension.ExtendableSQLContext.<init>(ExtendableSQLContext.scala:18)
        at org.apache.spark.sql.SapSQLContext.<init>(SapSQLContext.scala:18)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:22)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:27)
        at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:29)
        at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
        at $iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
        at $iwC$$iwC$$iwC.<init>(<console>:35)
        at $iwC$$iwC.<init>(<console>:37)
        at $iwC.<init>(<console>:39)
        at <init>(<console>:41)
        at .<init>(<console>:45)
        at .<clinit>(<console>)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        at $print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
        at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
        at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
        at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
        at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
        at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
        at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
        at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
        at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
        at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
        at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
        at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
        at org.apache.spark.repl.Main$.main(Main.scala:31)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Installed stack:

Service             Version
HDFS                2.7.1.2.3
MapReduce2          2.7.1.2.3
YARN                2.7.1.2.3
Tez                 0.7.0.2.3
Hive                1.2.1.2.3
HBase               1.1.1.2.3
Pig                 0.15.0.2.3
Sqoop               1.4.6.2.3
Oozie               4.2.0.2.3
ZooKeeper           3.4.6.2.3
Falcon              0.6.1.2.3
Storm               0.10.0
Flume               1.5.2.2.3
Ambari Metrics      0.1.0
Kafka               0.8.2.2.3
Mahout              0.9.0.2.3
Spark               1.5.2
SAP HANA Vora       1.1.25.37

Any help on this would be greatly appreciated.

TIA, Gopal

1 Answer:

Answer 0 (score: 0)

Your version list shows that you are running Spark 1.5.2 and Vora 1.1 Patch 1 (Ambari shows 1.1.25.37).

The error indicates that you are still using an earlier version of the Vora datasources library. Can you verify that you are using the Vora 1.1 Patch 1 datasources jar (spark-sap-datasources-1.2.10-assembly.jar)?

Please note: Vora 1.1 Patch 1 is fully compatible with Apache Spark 1.5.2. However, the Spark 1.5.2 build shipped with HDP 2.3.4 differs from the Apache Spark 1.5.2 release. The HDP Spark 1.5.2 build has two known issues: (1) when using the Vora Thriftserver, and (2) when using Zeppelin with the Vora interpreter. If you want to use either of those, I recommend using an Apache Spark 1.5.2 installed outside of Ambari. Using Vora 1.2 will resolve both issues and works with HDP Spark 1.5.2.
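To check which datasources jar is actually being picked up, one way is to launch spark-shell with the correct assembly jar passed explicitly. This is a sketch only: the install path `/opt/vora/lib` is an assumption and must be adjusted to wherever the Vora client package was unpacked on your system.

```shell
# Hypothetical path -- adjust to your Vora installation directory.
# First, confirm which datasources assembly jar(s) exist on the node;
# an older spark-sap-datasources-*.jar here would explain the NoSuchMethodError:
ls -l /opt/vora/lib/spark-sap-datasources-*-assembly.jar

# Then launch spark-shell with the Vora 1.1 Patch 1 jar on the classpath,
# so no stale jar from a previous install shadows it:
spark-shell --jars /opt/vora/lib/spark-sap-datasources-1.2.10-assembly.jar
```

If an older assembly jar is also listed in spark-defaults.conf (spark.driver.extraClassPath / spark.executor.extraClassPath), it should be removed or updated there as well, since entries in those settings take effect before --jars.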