Unable to run Hive SQL via Spark

Date: 2018-03-13 11:34:43

Tags: apache-spark hive

I am trying to execute Hive SQL through Spark code, but it throws the error mentioned below. I can only select data from the Hive table.

My Spark version is 1.6.1 and my Hive version is 1.2.1.

Command used to run spark-submit:

spark-submit --master local[8] --files /srv/data/app/spark/conf/hive-site.xml test_hive.py

Code:

    from pyspark import SparkContext, SparkConf
    from pyspark.sql import SQLContext
    from pyspark.sql import HiveContext

    sc = SparkContext()
    sqlContext = SQLContext(sc)
    # Bind the context to a lowercase name so it does not shadow the class
    hiveContext = HiveContext(sc)
    #hiveContext.setConf("yarn.timeline-service.enabled", "false")
    #hiveContext.sql("SET spark.sql.crossJoin.enabled=false")
    hiveContext.sql("use default")
    # TRUNCATE and LOAD DATA are Hive DDL statements executed via Hive's DDLTask
    hiveContext.sql("TRUNCATE TABLE default.test_table")
    hiveContext.sql("LOAD DATA LOCAL INPATH '/srv/data/data_files/*' OVERWRITE INTO TABLE default.test_table")
    df = hiveContext.sql("select * from version")

    # Python 2 print (Spark 1.6-era PySpark)
    for x in df.collect():
        print x

Error:


17386 [Thread-3] ERROR org.apache.spark.sql.hive.client.ClientWrapper  -
======================
HIVE FAILURE OUTPUT
======================
SET spark.sql.inMemoryColumnarStorage.compressed=true
SET spark.sql.thriftServer.incrementalCollect=true
SET spark.sql.hive.convertMetastoreParquet=false
SET spark.sql.broadcastTimeout=800
SET spark.sql.hive.thriftServer.singleSession=true
SET spark.sql.inMemoryColumnarStorage.partitionPruning=true
SET spark.sql.crossJoin.enabled=true
SET hive.support.sql11.reserved.keywords=false
SET spark.sql.crossJoin.enabled=false
OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. ClassCastException: attempting to castjar:file:/srv/data/OneClickProvision_1.2.2/files/app/spark/assembly/target/scala-2.10/spark-assembly-1.6.2-SNAPSHOT-hadoop2.6.1.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/srv/data/OneClickProvision_1.2.2/files/app/spark/assembly/target/scala-2.10/spark-assembly-1.6.2-SNAPSHOT-hadoop2.6.1.jar!/javax/ws/rs/ext/RuntimeDelegate.class

======================
END HIVE FAILURE OUTPUT
======================

Traceback (most recent call last):
  File "/home/iip/hist_load.py", line 10, in <module>
    HiveContext.sql("TRUNCATE TABLE default.tbl_wmt_pos_file_test")

 File "/srv/data/OneClickProvision_1.2.2/files/app/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 580, in sql
  File "/srv/data/OneClickProvision_1.2.2/files/app/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/srv/data/OneClickProvision_1.2.2/files/app/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 45, in deco
  File "/srv/data/OneClickProvision_1.2.2/files/app/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o46.sql.
: org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. ClassCastException: attempting to castjar:file:/srv/data/OneClickProvision_1.2.2/files/app/spark/assembly/target/scala-2.10/spark-assembly-1.6.2-SNAPSHOT-hadoop2.6.1.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/srv/data/OneClickProvision_1.2.2/files/app/spark/assembly/target/scala-2.10/spark-assembly-1.6.2-SNAPSHOT-hadoop2.6.1.jar!/javax/ws/rs/ext/RuntimeDelegate.class

2 Answers:

Answer 0 (score: 0)

> I can only select data from the Hive table.

This is perfectly normal and expected behavior. Spark SQL is not designed to be fully compatible with HiveQL or to implement the full set of features that Hive provides.

In general some compatibility is preserved, but since Spark SQL converges on the SQL:2003 standard, there is no guarantee that it will be preserved in the future.
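
As a sketch of a workaround under that constraint (not part of the original answer): instead of issuing Hive-only DDL such as TRUNCATE TABLE and LOAD DATA, the reload can go through Spark's own DataFrame API, which avoids the Hive DDLTask path. The path and table name below come from the question; treating the input as a single text column is an assumption about the file layout:

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext()
    hiveContext = HiveContext(sc)

    # Read the raw files with Spark itself; read.text() (Spark 1.6+) yields a
    # DataFrame with a single "value" column -- adjust parsing to the real schema.
    df = hiveContext.read.text("/srv/data/data_files/*")

    # Overwrite the table's contents via the DataFrame API instead of
    # TRUNCATE + LOAD DATA, so no Hive DDLTask is involved.
    df.write.insertInto("default.test_table", overwrite=True)

This only works if the DataFrame's schema matches the target table, so real code would first parse each text line into the table's columns.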

Answer 1 (score: 0)

From the post here:

The Spark job fails with a ClassCastException because of a conflict between different versions of the same class in the YARN and Spark jars.

Set the below property in the HiveContext (it matches the commented-out setConf line in the question's code).
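
A minimal sketch of the fix in context, assuming the yarn.timeline-service.enabled property from the question's commented-out line; it must be set before the first Hive statement executes:

    from pyspark import SparkContext
    from pyspark.sql import HiveContext

    sc = SparkContext()
    hiveContext = HiveContext(sc)

    # Disable the YARN timeline service client before any Hive call, so the
    # Jersey/javax.ws.rs classes that trigger the RuntimeDelegate cast are
    # never loaded from conflicting jars.
    hiveContext.setConf("yarn.timeline-service.enabled", "false")

    hiveContext.sql("use default")
    hiveContext.sql("TRUNCATE TABLE default.test_table")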