How to spark-submit a hiveContext coded in an IDE?

Time: 2017-08-13 03:51:12

Tags: apache-spark hive

I am trying to deploy code that uses a hiveContext on a Spark cluster:

./spark-submit --class com.dt.sparkSQL.DataFrameToHive --master spark://SparkMaster:7077 /root/Documents/DataFrameToHive.jar

but this is the problem:

17/08/13 10:29:46 INFO hive.metastore: Trying to connect to metastore with URI thrift://SparkMaster:9083
17/08/13 10:29:46 WARN hive.metastore: Failed to connect to the MetaStore Server...
17/08/13 10:29:46 INFO hive.metastore: Waiting 1 seconds before next connection attempt.
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

When I use spark-shell instead,

./spark-shell  --master spark://SparkMaster:7077

I can connect to SparkMaster:9083 successfully. Here is my spark/conf/hive-site.xml:

<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://SparkMaster:9083</value>
    <description>Thrift URI for the remote metastore. Used by the metastore client to connect to the remote metastore.</description>
  </property>
</configuration>

My question is: why does the connection to SparkMaster:9083 succeed from spark-shell but fail from spark-submit? Is there something wrong with SparkMaster:9083 itself? Here is the code written in the IDE:

package com.dt.sparkSQL

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object DataFrameToHive {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    conf.setAppName("DataFrameToHive").setMaster("spark://SparkMaster:7077")
    val sc = new SparkContext(conf)
    val hiveContext = new HiveContext(sc)

    // Create and load the people table in the userdb database
    hiveContext.sql("use userdb")
    hiveContext.sql("DROP TABLE IF EXISTS people")
    hiveContext.sql("CREATE TABLE IF NOT EXISTS people(name STRING, age INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'")
    hiveContext.sql("LOAD DATA LOCAL INPATH '/root/Documents/people.txt' INTO TABLE people")

    // Create and load the peopleScores table in the same database
    hiveContext.sql("DROP TABLE IF EXISTS peopleScores")
    hiveContext.sql("CREATE TABLE IF NOT EXISTS peopleScores(name STRING, score INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'")
    hiveContext.sql("LOAD DATA LOCAL INPATH '/root/Documents/peopleScore.txt' INTO TABLE peopleScores")

    // Join the two tables, keeping only people with a score above 90
    val resultDF = hiveContext.sql("select pi.name, pi.age, ps.score"
      + " from people pi join peopleScores ps on pi.name = ps.name"
      + " where ps.score > 90")

    // Persist the result as a Hive table, then read it back and show it
    hiveContext.sql("drop table if exists peopleResult")
    resultDF.saveAsTable("peopleResult")
    val dataframeHive = hiveContext.table("peopleResult")
    dataframeHive.show()

    sc.stop()
  }
}
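A side note on the write step: resultDF.saveAsTable is the Spark 1.3 API and was deprecated in Spark 1.4 in favor of the DataFrameWriter API. Assuming a Spark version of 1.4 or newer (the post does not say which version is in use), the equivalent call would be:

    // Equivalent write via the DataFrameWriter API (Spark >= 1.4)
    resultDF.write.saveAsTable("peopleResult")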

1 Answer:

Answer 0 (score: 0)

I have successfully solved this problem. Deploying a jar that uses a hiveContext is a bit different from deploying an ordinary jar: hive-site.xml has to be shipped along with the application explicitly via --files, so that the driver can find the metastore configuration:

./spark-submit --class com.dt.sparkSQL.DataFrameToHive --files /usr/local/hive/apache-hive-1.2.1-bin/conf/hive-site.xml --master spark://SparkMaster:7077 /root/Documents/DataFrameToHive.jar
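For reference, a variant that should achieve the same thing is to put the Hive conf directory on the driver's classpath instead of shipping the file; this is an assumption based on how spark-submit resolves configuration, not something verified on this exact cluster:

# Untested alternative: expose hive-site.xml through the driver classpath
./spark-submit --class com.dt.sparkSQL.DataFrameToHive \
  --driver-class-path /usr/local/hive/apache-hive-1.2.1-bin/conf \
  --master spark://SparkMaster:7077 \
  /root/Documents/DataFrameToHive.jar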