First, I start the Thrift server in Spark: /sbin/start-thriftserver.sh
The daemon starts:
hadoop 13015 1 99 13:52 pts/1 00:00:09 /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java -cp /home/hadoop/spark/lib/hive-jdbc-0.13.0.jar:/home/hadoop/spark-1.4.1-bin-hadoop2.6/sbin/../conf/:/home/hadoop/spark-1.4.1-bin-hadoop2.6/lib/spark-assembly-1.4.1-hadoop2.6.0.jar:/home/hadoop/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/home/hadoop/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/home/hadoop/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar
-Xms512m -Xmx512m -XX:MaxPermSize=256m org.apache.spark.deploy.SparkSubmit --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 spark-internal
After that, I start /bin/pyspark.
My Hive version is 0.13.1,
my Spark version is 1.4.1,
and my Hadoop version is 2.7.
The Spark classpath is below:
SPARK_CLASSPATH = /home/account/spark/lib/hive-jdbc-0.13.0.jar:/home/account/spark/lib/hive-exec-0.13.0.jar:/home/account/spark/lib/hive-metastore-0.13.0.jar:/home/account/spark/lib/hive-service-0.13.0.jar:/home/account/spark/lib/libfb303-0.9.0.jar:/home/account/spark/lib/log4j-1.2.16.jar
In pyspark (the Python shell), I wrote this code:
>>> df = sqlContext.load(source="jdbc",driver="org.apache.hive.jdbc.HiveDriver", url="jdbc:hive2://IP:10000/default", dbtable="default.test")
But it didn't work, and I got the error below. How can I fix it?
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dev/user/ja/spark/python/pyspark/sql/context.py", line 458, in load
return self.read.load(path, source, schema, **options)
File "/home/dev/user/ja/spark/python/pyspark/sql/readwriter.py", line 112, in load
return self._df(self._jreader.load())
File "/home/dev/user/ja/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
File "/home/dev/user/ja/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o29.load.
: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveResultSetMetaData.isSigned(HiveResultSetMetaData.java:141)
at org.apache.spark.sql.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:132)
at org.apache.spark.sql.jdbc.JDBCRelation.<init>(JDBCRelation.scala:128)
at org.apache.spark.sql.jdbc.DefaultSource.createRelation(JDBCRelation.scala:113)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:269)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:207)
at java.lang.Thread.run(Thread.java:722)
I think the Hive JDBC driver does not implement the HiveResultSetMetaData.isSigned method,
but I don't know how to work around this error. Please help.
Thanks
Answer 0 (score: 1)
I'm not certain, but I'll answer my own question.
I think it is caused by the version. When I ran the command below, I got the "Method not supported" error,
but when I ran the same command on spark-1.3.1, it worked.
>>> df = sqlContext.load(source="jdbc",driver="org.apache.hive.jdbc.HiveDriver", url="jdbc:hive2://IP:10000/default", dbtable="default.test")
So I think the problem is the version: as the stack trace shows, Spark 1.4.1's JDBC schema resolution (JDBCRDD.resolveTable) calls ResultSetMetaData.isSigned, which the Hive 0.13 JDBC driver does not implement.
But this is just my guess.
This page may help you: http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.4/ds_Hive/jdbc-hs2.html
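For what it's worth, if the goal is simply to read a Hive table into a DataFrame, one workaround on Spark 1.4.x is to bypass the JDBC data source entirely (its schema resolution is what calls isSigned) and query Hive through HiveContext instead. A minimal sketch, assuming your Spark build includes Hive support and that hive-site.xml (pointing at your metastore) is on Spark's conf path:

>>> from pyspark.sql import HiveContext
>>> sqlContext = HiveContext(sc)   # sc already exists in the pyspark shell
>>> df = sqlContext.sql("SELECT * FROM default.test")
>>> df.show()

This reads the same default.test table as the JDBC call, but through Spark's native Hive integration, so no Hive JDBC driver metadata methods are involved.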