I am trying to integrate HBase with Spark. I tried two kinds of integration, and both produce errors. First, I copied all of the jars from HBase's lib folder into Spark's jars folder. Some of the HBase jars conflicted with jars Spark already ships, so in those cases I kept the Spark versions. I also added export SPARK_CLASSPATH=/usr/local/spark/spark-2.1.0/jars/* to spark-env.sh and export HADOOP_USER_CLASSPATH_FIRST=true to my .bashrc (both settings are reproduced below).
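For reference, these are the two settings exactly as I added them, in the files where I put them (paths match my installation):

# spark-env.sh
export SPARK_CLASSPATH=/usr/local/spark/spark-2.1.0/jars/*

# ~/.bashrc
export HADOOP_USER_CLASSPATH_FIRST=true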
When I then start spark-shell, I get the following IncompatibleClassChangeError:
hduser@master:/usr/local/spark/spark-2.1.0/bin$ ./spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
[ERROR] Terminal initialization failed; falling back to unsupported
java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but interface was expected
at jline.TerminalFactory.create(TerminalFactory.java:101)
at jline.TerminalFactory.get(TerminalFactory.java:158)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:229)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
at scala.tools.nsc.interpreter.jline.JLineConsoleReader.<init>(JLineReader.scala:62)
at scala.tools.nsc.interpreter.jline.InteractiveReader.<init>(JLineReader.scala:34)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiater$1$1.apply(ILoop.scala:858)
at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiater$1$1.apply(ILoop.scala:855)
at scala.tools.nsc.interpreter.ILoop.scala$tools$nsc$interpreter$ILoop$$mkReader$1(ILoop.scala:862)
at scala.tools.nsc.interpreter.ILoop$$anonfun$21$$anonfun$apply$9.apply(ILoop.scala:873)
at scala.tools.nsc.interpreter.ILoop$$anonfun$21$$anonfun$apply$9.apply(ILoop.scala:873)
at scala.util.Try$.apply(Try.scala:192)
at scala.tools.nsc.interpreter.ILoop$$anonfun$21.apply(ILoop.scala:873)
at scala.tools.nsc.interpreter.ILoop$$anonfun$21.apply(ILoop.scala:873)
at scala.collection.immutable.Stream.map(Stream.scala:418)
at scala.tools.nsc.interpreter.ILoop.chooseReader(ILoop.scala:873)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$2.apply(ILoop.scala:914)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:914)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:909)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:909)
at org.apache.spark.repl.Main$.doMain(Main.scala:68)
at org.apache.spark.repl.Main$.main(Main.scala:51)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/01/29 13:02:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/29 13:02:27 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to '/usr/local/spark/spark-2.1.0/jars/*').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath
17/01/29 13:02:27 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to '/usr/local/spark/spark-2.1.0/jars/*' as a work-around.
17/01/29 13:02:27 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to '/usr/local/spark/spark-2.1.0/jars/*' as a work-around.
17/01/29 13:02:36 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.0.0.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1485720148707).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
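The shell does come up despite the error. Since jline.Terminal was a class in old jline (0.9.x/1.0) but is an interface in the jline 2.x that the Scala 2.11 REPL expects, I suspect one of the jars I copied from HBase dragged in an old jline. A quick way to check which jline jars ended up on the classpath would be something like this (a diagnostic sketch; the grep pattern is just my guess at the conflicting artifact):

# list any jline jars now sitting in Spark's jars folder
ls /usr/local/spark/spark-2.1.0/jars | grep -i jline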
My second attempt was the following (with both HBase and Spark in their freshly installed state, no jars copied over):
hduser@master:~$ HBASE_PATH='/usr/local/hbase/hbase-1.2.4/bin/hbase classpath'
hduser@master:~$ cd /usr/local/spark/spark-2.1.0/bin
hduser@master:/usr/local/spark/spark-2.1.0/bin$ ./spark-shell --driver-class-path $HBASE_PATH
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/01/29 16:40:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/29 16:41:03 WARN metastore.ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.0.0.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1485733255875).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.apache.hadoop.hbase.mapreduce.TableInputFormat
<console>:23: error: object hbase is not a member of package org.apache.hadoop
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
^
scala>
This time I get object hbase is not a member of package org.apache.hadoop.
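One thing I am unsure about is the quoting in my HBASE_PATH assignment above: with single quotes, the variable holds the literal string /usr/local/hbase/hbase-1.2.4/bin/hbase classpath rather than the output of that command, so --driver-class-path may never see the HBase jars at all. If command substitution is what is needed here, I assume the invocation would look like this (untested sketch):

# capture the output of `hbase classpath` instead of the literal command string
HBASE_PATH=$(/usr/local/hbase/hbase-1.2.4/bin/hbase classpath)
./spark-shell --driver-class-path "$HBASE_PATH"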
Please help me integrate HBase 1.2.4 with Spark 2.1.0.