NullPointerException in Zeppelin on CDH 5.7.1 with Spark 1.6.0 when using DataFrames

Date: 2017-01-18 09:14:52

Tags: apache-spark cloudera-cdh apache-zeppelin

[Note: although this question has no answers, don't just pass it by. Sebastian Piu's comments are helpful.]

I have installed Zeppelin-0.6.2-bin-all on Cloudera CDH 5.7.1 with Spark 1.6.0.

I set these environment variables in conf/zeppelin-env.sh and ~/.bashrc:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera/
export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark/
export ZEPPELIN_HOME=/var/lib/zeppelin-0.6.2/
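
Because zeppelin-daemon.sh launches the Spark interpreter in a separate JVM, exports in an interactive shell do not necessarily reach it. A quick sanity check (just an illustrative snippet) is to run a paragraph that prints what the interpreter process actually sees:

println(sys.env.getOrElse("SPARK_HOME", "SPARK_HOME not set"))  // did the export reach the interpreter process?
println(sc.version)                                             // should print 1.6.0 from the CDH parcel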

In a newly created notebook, the first paragraph, which runs the following command, works fine:

sc.parallelize(Seq(1,2,3,4,5))

The result is:

res0: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:30

But the second paragraph, which merely converts that RDD to a DataFrame, fails with a NullPointerException:

sc.parallelize(Seq(1,2,3,4,5)).toDF("number")

The resulting stack trace is:

java.lang.NullPointerException
at org.apache.spark.sql.hive.client.ClientWrapper.conf(ClientWrapper.scala:205)
at org.apache.spark.sql.hive.HiveContext.hiveconf$lzycompute(HiveContext.scala:554)
at org.apache.spark.sql.hive.HiveContext.hiveconf(HiveContext.scala:553)
at org.apache.spark.sql.hive.HiveContext$$anonfun$configure$1.apply(HiveContext.scala:540)
at org.apache.spark.sql.hive.HiveContext$$anonfun$configure$1.apply(HiveContext.scala:539)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.sql.hive.HiveContext.configure(HiveContext.scala:539)
at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:252)
at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:239)
at org.apache.spark.sql.hive.HiveContext$$anon$2.<init>(HiveContext.scala:459)
at org.apache.spark.sql.hive.HiveContext.catalog$lzycompute(HiveContext.scala:459)
at org.apache.spark.sql.hive.HiveContext.catalog(HiveContext.scala:458)
at org.apache.spark.sql.hive.HiveContext$$anon$3.<init>(HiveContext.scala:475)
at org.apache.spark.sql.hive.HiveContext.analyzer$lzycompute(HiveContext.scala:475)
at org.apache.spark.sql.hive.HiveContext.analyzer(HiveContext.scala:474)
at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:34)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:133)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.internalCreateDataFrame(SQLContext.scala:532)
at org.apache.spark.sql.SQLImplicits.intRddToDataFrameHolder(SQLImplicits.scala:185)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
at $iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC.<init>(<console>:45)
at $iwC.<init>(<console>:47)
at <init>(<console>:49)
at .<init>(<console>:53)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1045)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1326)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:821)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:852)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:800)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:953)
at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:1168)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:1111)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:1104)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
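
The top frames (ClientWrapper.conf, HiveContext.hiveconf) show that the failure occurs while the pre-built HiveContext initializes its Hive client, not in toDF itself. A diagnostic that might narrow this down (the plainSqlContext name is just illustrative) is to build a plain SQLContext that bypasses Hive entirely and retry the same conversion:

val plainSqlContext = new org.apache.spark.sql.SQLContext(sc)  // plain SQLContext, no Hive metastore involved
import plainSqlContext.implicits._                             // brings toDF into scope for this context
sc.parallelize(Seq(1, 2, 3, 4, 5)).toDF("number").show()

If this succeeds, the problem is confined to the Hive side; the Spark interpreter also exposes a zeppelin.spark.useHiveContext property that, when set to false, makes the built-in sqlContext a plain SQLContext.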

I have verified that the same command works fine in spark-shell:

scala> sc.parallelize(Seq(1,2,3,4,5)).toDF("number")
res0: org.apache.spark.sql.DataFrame = [number: int]
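
For what it's worth, printing the concrete class of the pre-created context in both spark-shell and a Zeppelin paragraph would show whether the two even construct the same kind of context (on Spark 1.6 built with Hive support, both are expected to be a HiveContext):

println(sqlContext.getClass.getName)  // e.g. org.apache.spark.sql.hive.HiveContext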

I have read the documentation and thought that "installing a Spark interpreter built with Scala 2.10" might help.

So I ran these commands, but they failed too:

# bin/zeppelin-daemon.sh stop
# mv interpreter/spark interpreter/spark.bak
# bin/install-interpreter.sh --name spark --artifact org.apache.zeppelin:zeppelin-spark_2.10:0.6.2

The output was:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/var/lib/zeppelin-0.6.2/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/lib/zeppelin-0.6.2/lib/zeppelin-interpreter-0.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Install spark(org.apache.zeppelin:zeppelin-spark_2.10:0.6.2) to /var/lib/zeppelin-0.6.2/interpreter/spark ...
Exception in thread "main" java.lang.NullPointerException
    at org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:352)
    at org.apache.zeppelin.dep.DependencyResolver.getArtifactsWithDep(DependencyResolver.java:176)
    at org.apache.zeppelin.dep.DependencyResolver.loadFromMvn(DependencyResolver.java:129)
    at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:77)
    at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:94)
    at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:86)
    at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:170)
    at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:150)
    at org.apache.zeppelin.interpreter.install.InstallInterpreter.main(InstallInterpreter.java:275)

(Note: maven-3.3.9 is installed on the system and the mvn command works fine.)

I suspected a wrong Java version was the cause, so I tried

export JAVA_HOME=/usr/java/jdk1.8.0_05

and

export JAVA_HOME=/usr/java/jdk1.6.0_45

but both gave the same error.
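
As an aside, Spark 1.6 itself requires Java 7 or newer, so the jdk1.6.0_45 attempt could not have worked in any case. And since the JAVA_HOME that matters is the one visible to zeppelin-daemon.sh at start-up, a paragraph like the following (again, just an illustrative check) confirms which JVM the interpreter really runs on:

println(sys.props("java.home"))     // the JDK the interpreter process was started with
println(sys.props("java.version"))  // must be 1.7+ for Spark 1.6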

I also tried downgrading to Zeppelin-0.6.1-bin-all, but got the same result.

I believe the results above are reproducible: I have repeated all of them on another "CDH 5.7.1 with Spark 1.6.0" cluster and got exactly the same outcome.

How can I make this work?

0 Answers:

There are no answers.