I'm running into a problem when querying a Hive table using "yarn-client". The details are as follows.
如果我使用master =" local"
,此代码可以正常工作from __future__ import print_function
from pyspark import SparkContext, SparkConf
from pyspark.sql import HiveContext
conf = SparkConf().setAppName("My App")
sc=SparkContext(master="yarn-client")
sc.setLogLevel("WARN")
sqlContext = HiveContext(sc)
test = sqlContext.sql("SELECT * FROM table WHERE column = 'x'").collect()
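For comparison, the working local run differs only in how the context is created (a minimal sketch; everything else is identical):

# Same script, but with a local master -- this variant returns the rows fine
sc = SparkContext(master="local", conf=conf)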
With "yarn-client", however, I get the following error and traceback:
17/06/26 12:22:43 ERROR TaskSetManager: Task 1 in stage 1.0 failed 4 times; aborting job
Traceback (most recent call last):
  File "/home/user/hellospark_2_.py", line 31, in <module>
    test = sqlContext.sql("SELECT * FROM table WHERE column = 'x'").collect()
  File "/opt/mapr/spark/spark-1.6.1/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 280, in collect
  File "/opt/mapr/spark/spark-1.6.1/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/opt/mapr/spark/spark-1.6.1/python/lib/pyspark.zip/pyspark/sql/utils.py", line 45, in deco
  File "/opt/mapr/spark/spark-1.6.1/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o44.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (): java.io.InvalidClassException: org.apache.spark.sql.catalyst.expressions.Literal; local class incompatible: stream classdesc serialVersionUID = -4259705229845269663, local class serialVersionUID = 3305180847846277455
After some testing, I found that the error occurs only when .collect() is used, and only when querying rows from a Hive DataFrame. Querying database names, table lists, or table descriptions works fine.
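For example, metadata queries like the following complete without error (a minimal sketch, reusing the sqlContext from above; the table name is a placeholder):

# Listing and describing tables works; only fetching actual rows fails
print(sqlContext.sql("SHOW TABLES").collect())
print(sqlContext.sql("DESCRIBE table").collect())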
From what I've seen online, this can be caused by a Spark, Scala, or Hadoop version mismatch. In my tests, Spark reports the same version (1.6.1) whether running locally or in yarn-client mode. I also compared the dataframe.py file locally and on the remote nodes, and it looks identical in both cases.
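This is roughly the kind of check I mean (an illustrative probe reusing the sc from above, not my exact code): print the driver's Spark version, then have the executors report which Spark install and Python they see, to spot a mismatched install on a cluster node:

from __future__ import print_function

# Driver-side Spark version
print("driver Spark version:", sc.version)

def probe(_):
    # Runs on the executors; reports the Spark install path and Python version there
    import os, sys
    return [(os.environ.get("SPARK_HOME"), sys.version.split()[0])]

# One small task per partition, deduplicated across the cluster
print(sc.parallelize(range(4), 4).mapPartitions(probe).distinct().collect())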
I'd appreciate any insight or help with this problem! Thanks for your time.