When I start a session, I get some errors about Hive and HiveSession

Date: 2017-11-25 03:28:53

Tags: java r hadoop apache-spark sparkr

I am using SparkR to analyze financial data. When I try to get a SparkR session, I get the errors below. I don't know how to solve the problem. Thanks.
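For reference, the failure needs nothing more than the bare session call. A minimal sketch of the setup (the SPARK_HOME path here is hypothetical, not taken from the question):

    # Point R at the Spark installation and load the bundled SparkR package.
    Sys.setenv(SPARK_HOME = "/path/to/spark")   # hypothetical install path
    library(SparkR, lib.loc = file.path(Sys.getenv("SPARK_HOME"), "R", "lib"))

    # Requesting the session is what triggers the error trace below.
    sparkR.session()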

  

    sparkR.session()
    17/11/24 19:23:29 ERROR r.RBackendHandler: getOrCreateSparkSession on org.apache.spark.sql.api.r.SQLUtils failed
    java.lang.reflect.InvocationTargetException
      at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.base/java.lang.reflect.Method.invoke(Method.java:564)
      at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
      at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
      at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
      at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
      at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
      at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
      at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
      at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
      at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
      at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
      at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
      at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
      at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
      at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
      at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
      at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
      at java.base/java.lang.Thread.run(Thread.java:844)
    Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
      at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
      at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
      at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
      at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
      at org.apache.spark.sql.api.r.SQLUtils$$anonfun$setSparkContextSessionConf$2.apply(SQLUtils.scala:71)
      at org.apache.spark.sql.api.r.SQLUtils$$anonfun$setSparkContextSessionConf$2.apply(SQLUtils.scala:70)
      at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
      at scala.collection.Iterator$class.foreach(Iterator.scala:893)
      at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
      at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
      at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
      at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
      at org.apache.spark.sql.api.r.SQLUtils$.setSparkContextSessionConf(SQLUtils.scala:70)
      at org.apache.spark.sql.api.r.SQLUtils$.getOrCreateSparkSession(SQLUtils.scala:63)
      at org.apache.spark.sql.api.r.SQLUtils.getOrCreateSparkSession(SQLUtils.scala)
      ... 36 more
    Caused by: java.lang.IllegalArgumentException: Unable to find hive jars to connect to metastore. Please set spark.sql.hive.metastore.jars.
      at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:302)
      at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
      at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
      at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
      at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
      at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
      at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
      at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
      at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
      at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
      at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
      at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
      at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
      at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
      at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
      at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
      at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
      ... 52 more
    Error in handleErrors(returnStatus, conn) :
      java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
      at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
      at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
      at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
      at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
      at org.apache.spark.sql.api.r.SQLUtils$$anonfun$setSparkContextSessionConf$2.apply(SQLUtils.scala:71)
      at org.apache.spark.sql.api.r.SQLUtils$$anonfun$setSparkContextSessionConf$2.apply(SQLUtils.scala:70)
      at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
      at scala.collection.Iterator$class.foreach(Iterator.sca
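The innermost "Caused by" is the actionable part: Spark cannot find the Hive client jars and asks for spark.sql.hive.metastore.jars. A hedged sketch of the two obvious things to try from SparkR, not a confirmed fix for this setup; both arguments (enableHiveSupport, sparkConfig) are documented parameters of sparkR.session():

    # Run each alternative in a fresh R session; sparkR.session() reuses an
    # existing SparkSession if one is already active.

    # Option 1: skip Hive entirely if the analysis does not need a Hive
    # metastore (enableHiveSupport defaults to TRUE).
    sparkR.session(enableHiveSupport = FALSE)

    # Option 2: tell Spark where to get the Hive client jars, as the message
    # asks. "maven" downloads them at startup; the default "builtin" expects
    # them inside the Spark assembly, which the trace shows were not found.
    sparkR.session(sparkConfig = list(spark.sql.hive.metastore.jars = "maven"))

Also worth noting: the java.base/... frames show the driver JVM is Java 9 or newer, which Spark releases of this era (2.2.x) did not officially support; if neither option helps, rerunning under a Java 8 JVM may be what actually resolves the error.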

0 Answers:

There are no answers yet.