Zeppelin Spark interpreter fails in org.apache.zeppelin.spark.Utils.invokeMethod

Asked: 2017-06-26 05:39:21

Tags: apache-spark hadoop hive apache-zeppelin

I have a cluster with Hadoop 2.2, Spark 2.1.1, Hive 2.1.1, and Zeppelin 0.7.2.

In a Zeppelin Spark paragraph, I execute:

%spark
1+1

The exception below appears in the log. How come? Any ideas?

  

    INFO [2017-06-23 06:26:40,727] ({pool-2-thread-4} HiveMetaStoreClient.java[open]:376) - Trying to connect to metastore with URI thrift://192.168.1.138:9083
    INFO [2017-06-23 06:26:40,856] ({pool-2-thread-4} HiveMetaStoreClient.java[open]:472) - Connected to metastore.
    ERROR [2017-06-23 06:26:40,997] ({pool-2-thread-4} Utils.java[invokeMethod]:40)
    java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
        at org.apache.zeppelin.spark.SparkInterpreter.createSparkSession(SparkInterpreter.java:361)
        at org.apache.zeppelin.spark.SparkInterpreter.getSparkSession(SparkInterpreter.java:233)
        at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:826)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
        at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:981)
        at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:110)
        at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:109)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
        at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$5.apply(SparkSession.scala:878)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
        at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
        at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
        at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:878)
        ... 20 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:978)
        ... 30 more
    Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
        at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169)
        at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86)
        at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
        at org.apache.spark.sql.SparkSession$$anonfun$sharedState$1.apply(SparkSession.scala:101)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:101)
        at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:100)
        at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:157)
        at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:32)
        ... 35 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:166)
        ... 43 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:358)
        at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:262)
        at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:66)
        ... 48 more
    Caused by: java.lang.AbstractMethodError: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider.getProxy()Lorg/apache/hadoop/io/retry/FailoverProxyProvider$ProxyInfo;
        at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:73)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:64)
        at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:147)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:505)
        at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:188)
        ... 56 more
    INFO [2017-06-23 06:26:40,998] ({pool-2-thread-4} SparkInterpreter.java[createSparkSession]:362) - Created Spark session with Hive support
    ERROR [2017-06-23 06:26:40,998] ({pool-2-thread-4} Job.java[run]:181) - Job failed
    java.lang.NullPointerException
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:38)
        at org.apache.zeppelin.spark.Utils.invokeMethod(Utils.java:33)
        at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext_2(SparkInterpreter.java:391)
        at org.apache.zeppelin.spark.SparkInterpreter.createSparkContext(SparkInterpreter.java:380)
        at org.apache.zeppelin.spark.SparkInterpreter.getSparkContext(SparkInterpreter.java:146)
        at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:828)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:491)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
        at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
    INFO [2017-06-23 06:26:40,999] ({pool-2-thread-4} SchedulerFactory.java[jobFinished]:137) - Job remoteInterpretJob_1498199191018 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter1807637042
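For context: the innermost `Caused by` in the trace is a `java.lang.AbstractMethodError` on `ConfiguredFailoverProxyProvider.getProxy()`, which generally indicates that the Hadoop client classes on the interpreter's classpath do not match the cluster's Hadoop version (here, the hadoop-client jars bundled with the Spark 2.1.1 build vs. the Hadoop 2.2 cluster). A common way to rule this out is to point Zeppelin at a Spark build matching the cluster's Hadoop version and at the cluster's own Hadoop configuration in `conf/zeppelin-env.sh`. A minimal sketch, where the paths are assumptions for this cluster, not values from the question:

```shell
# conf/zeppelin-env.sh -- both paths below are placeholders; adjust to your install.
export SPARK_HOME=/opt/spark-2.1.1          # Spark build compiled against the cluster's Hadoop version
export HADOOP_CONF_DIR=/etc/hadoop/conf     # so the HA NameNode proxy settings come from the cluster
```

After editing this file, the Spark interpreter must be restarted from the Zeppelin UI for the new classpath to take effect.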

0 Answers:

No answers yet.