How do I query data stored in a Hive table from Spark 2's SparkSession?

Date: 2016-08-29 08:09:31

Tags: scala maven hive apache-spark-sql apache-spark-2.0

I am trying to query data stored in a Hive table from Spark 2. Environment:

1. cloudera-quickstart-vm-5.7.0-0-vmware
2. Eclipse with the Scala 2.11.8 plugin
3. Spark 2 and Maven

I have not changed Spark's default configuration. Do I need to configure anything in Spark or Hive?
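For context, Spark 2 normally picks up hive-site.xml from the classpath when enableHiveSupport() is used; if it is not picked up, the metastore can be pointed at explicitly. Below is a minimal sketch of what that could look like; the object name, thrift URI, and warehouse path are illustrative assumptions, not values from this setup:

import org.apache.spark.sql.SparkSession

object HiveConfigSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local")
      .appName("HiveSQLConfigured")
      // Hypothetical values: point these at the actual metastore and warehouse.
      .config("hive.metastore.uris", "thrift://localhost:9083")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .enableHiveSupport()
      .getOrCreate()

    // Quick sanity check that the metastore is reachable.
    spark.sql("show databases").show()
  }
}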

Code:

import org.apache.spark._
import org.apache.spark.sql.SparkSession

object hiveTest {
  def main(args: Array[String]) {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("HiveSQL")
      .enableHiveSupport()
      .getOrCreate()

    val data = sparkSession2.sql("select * from test.mark")
  }
}

Getting this error:

16/08/29 00:18:10 INFO SparkSqlParser: Parsing command: select * from test.mark
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:48)
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:47)
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:54)
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog(HiveSharedState.scala:54)
    at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:50)
    at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
    at org.apache.spark.sql.hive.HiveSessionState$$anon$1.<init>(HiveSessionState.scala:63)
    at org.apache.spark.sql.hive.HiveSessionState.analyzer$lzycompute(HiveSessionState.scala:63)
    at org.apache.spark.sql.hive.HiveSessionState.analyzer(HiveSessionState.scala:62)
    at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:49)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
    at hiveTest$.main(hiveTest.scala:34)
    at hiveTest.main(hiveTest.scala)
Caused by: java.lang.IllegalArgumentException: requirement failed: Duplicate SQLConfigEntry. spark.sql.hive.convertCTAS has been registered
    at scala.Predef$.require(Predef.scala:224)
    at org.apache.spark.sql.internal.SQLConf$.org$apache$spark$sql$internal$SQLConf$$register(SQLConf.scala:44)
    at org.apache.spark.sql.internal.SQLConf$SQLConfigBuilder$$anonfun$apply$1.apply(SQLConf.scala:51)
    at org.apache.spark.sql.internal.SQLConf$SQLConfigBuilder$$anonfun$apply$1.apply(SQLConf.scala:51)
    at org.apache.spark.internal.config.TypedConfigBuilder$$anonfun$createWithDefault$1.apply(ConfigBuilder.scala:122)
    at org.apache.spark.internal.config.TypedConfigBuilder$$anonfun$createWithDefault$1.apply(ConfigBuilder.scala:122)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.internal.config.TypedConfigBuilder.createWithDefault(ConfigBuilder.scala:122)
    at org.apache.spark.sql.hive.HiveUtils$.<init>(HiveUtils.scala:103)
    at org.apache.spark.sql.hive.HiveUtils$.<clinit>(HiveUtils.scala)
    ... 14 more

Any suggestions are appreciated.

Thanks,
Robin

1 Answer:

Answer 0 (score: 0)

This is what I am using:

import org.apache.spark.sql.SparkSession

object LoadCortexDataLake extends App {
  val spark = SparkSession.builder().appName("Cortex-Batch").enableHiveSupport().getOrCreate()

  // file, table_nm, yr, mth, and dt are assumed to be defined elsewhere in the job.
  spark.read.parquet(file).createOrReplaceTempView("temp")
  spark.sql(s"insert overwrite table $table_nm partition(year='$yr',month='$mth',day='$dt') select * from temp")
}

I think you should use 'sparkSession.sql' instead of 'sparkSession2.sql'.
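For reference, a minimal sketch of the question's code with that name fixed; the data.show() call is just an illustrative way to inspect the result, assuming the test.mark table exists:

import org.apache.spark.sql.SparkSession

object hiveTest {
  def main(args: Array[String]): Unit = {
    val sparkSession = SparkSession.builder
      .master("local")
      .appName("HiveSQL")
      .enableHiveSupport()
      .getOrCreate()

    // Query through the same variable the session was bound to.
    val data = sparkSession.sql("select * from test.mark")
    data.show()
  }
}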
