Trying to connect to a Phoenix table using PySpark and getting the following error

Time: 2017-04-22 02:26:25

Tags: hadoop phoenix

My code to read the Phoenix table:

    sql_sc.read.format("org.apache.phoenix.spark").option("table", tablename).option("zkUrl", "10.0.11.21:21810").load()

Error:

    Traceback (most recent call last):
      File "/bdaas/exe/healthcare/hl7visualization.py", line 42, in <module>
        hl7 = phoenix_sparkdata(spark_app='hl7-app', spark_master='local', table_name='hl7table_v2_3')
      File "/bdaas/exe/healthcare/hl7visualization.py", line 19, in __init__
        self.dataframe = self.phoenix_getdataframe(table_name)
      File "/bdaas/exe/healthcare/hl7visualization.py", line 41, in phoenix_getdataframe
        df = self.sql_sc.read.format("org.apache.phoenix.spark").option("table", tablename).option("zkUrl", "10.0.11.21:218").load()
      File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 139, in load
      File "/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
      File "/usr/hdp/2.4.2.0-258/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 45, in deco
      File "/usr/hdp/2.4.2.0-258/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
    py4j.protocol.Py4JJavaError: An error occurred while calling o43.load.
    : java.lang.NoSuchMethodError: com.fasterxml.jackson.module.scala.deser.BigDecimalDeserializer$.handledType()Ljava/lang/Class;
            at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<init>(ScalaNumberDeserializersModule.scala:49)
            at com.fasterxml.jackson.module.scala.deser.NumberDeserializers$.<clinit>(ScalaNumberDeserializersModule.scala)
            at com.fasterxml.jackson.module.scala.deser.ScalaNumberDeserializersModule$class.$init$(ScalaNumberDeserializersModule.scala:61)
            at com.fasterxml.jackson.module.scala.DefaultScalaModule.<init>(DefaultScalaModule.scala:19)
            at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<init>(DefaultScalaModule.scala:35)
            at com.fasterxml.jackson.module.scala.DefaultScalaModule$.<clinit>(DefaultScalaModule.scala)
            at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:81)
            at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
            at org.apache.spark.SparkContext.withScope(SparkContext.scala:714)
            at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1152)
            at org.apache.phoenix.spark.PhoenixRDD.<init>(PhoenixRDD.scala:46)
            at org.apache.phoenix.spark.PhoenixRelation.schema(PhoenixRelation.scala:50)
            at org.apache.spark.sql.execution.datasources.LogicalRelation.<init>(LogicalRelation.scala:37)
            at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
            at py4j.Gateway.invoke(Gateway.java:259)
            at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
            at py4j.commands.CallCommand.execute(CallCommand.java:79)
            at py4j.GatewayConnection.run(GatewayConnection.java:209)
            at java.lang.Thread.run(Thread.java:745)
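For reference, the read call above can be written as a small helper with straight quotes (the post's curly quotes would themselves be a syntax error in Python). This is a minimal sketch, not the poster's actual code: the function name `read_phoenix_table` and the ZooKeeper URL are illustrative placeholders, and it assumes a working SQLContext with the phoenix-spark connector already on the driver/executor classpath.

```python
def read_phoenix_table(sql_sc, table_name, zk_url):
    """Load a Phoenix table into a Spark DataFrame via the phoenix-spark connector.

    Sketch only: `sql_sc` is assumed to be a pyspark SQLContext whose JVM
    has the phoenix-spark jar available; `zk_url` is the ZooKeeper quorum
    (host:port) that Phoenix should connect through.
    """
    return (sql_sc.read
            .format("org.apache.phoenix.spark")   # Phoenix's Spark data source
            .option("table", table_name)          # Phoenix table name
            .option("zkUrl", zk_url)              # e.g. "10.0.11.21:2181" (placeholder)
            .load())
```

Note that the `NoSuchMethodError` in the traceback is raised on the JVM side, so restructuring the Python call alone would not change the outcome; the call shape itself matches what the phoenix-spark data source expects.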

0 Answers:
