Derby metastore directory created in the Spark workspace

Date: 2017-06-13 11:49:12

Tags: apache-spark hive apache-spark-sql

I installed Spark 2.1.0 with Eclipse integration and Hive 2, configured the metastore in MySQL, and placed the hive-site.xml file in the spark/conf folder. I am trying to access tables that already exist in Hive from Eclipse. When I run the program, a metastore_db folder and a derby.log file are created in the Spark workspace, and the Eclipse console shows the following:

Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
17/06/13 18:26:43 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/06/13 18:26:43 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/06/13 18:26:43 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
17/06/13 18:26:43 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
17/06/13 18:26:43 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
17/06/13 18:26:43 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL

Spark is not picking up the MySQL metastore database that I configured.

It also throws this error:

Exception in thread "main" java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
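For reference, a hive-site.xml that points the Hive metastore at MySQL typically carries the JDBC connection properties shown in the sketch below; the host, database name, and credentials are placeholders rather than values from this setup. When a metastore_db folder and derby.log appear next to the application, it usually means this file was not visible on the application's classpath at runtime, so Spark fell back to its default embedded Derby metastore.

<configuration>
  <!-- JDBC URL of the MySQL database backing the metastore (placeholder host/db) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://MYSQLHOST:3306/hive_metastore</value>
  </property>
  <!-- MySQL JDBC driver class -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- Credentials for the metastore database (placeholders) -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>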

Code:

import org.apache.spark.SparkContext, org.apache.spark.SparkConf
import com.typesafe.config._
import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession

object hivecore {

  def main(args: Array[String]) {

    val warehouseLocation = "hdfs://HADOOPMASTER:54310/user/hive/warehouse"

    val spark = SparkSession
      .builder().master("local[*]")
      .appName("hivecore")
      .config("spark.sql.warehouse.dir", warehouseLocation)
      .enableHiveSupport()
      .getOrCreate()

    import spark.implicits._
    import spark.sql

    sql("SELECT * FROM sample.source").show()
  }
}
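As a sketch, if the classpath turns out to be the issue, the metastore connection settings can also be handed to the session builder directly through Spark's spark.hadoop.* prefix, which forwards properties into the underlying Hadoop/Hive configuration. The host, database, and credentials below are placeholders, and this assumes the MySQL JDBC driver is on the application classpath:

import org.apache.spark.sql.SparkSession

// Sketch only: same builder as above, but with the metastore JDBC settings
// passed explicitly instead of relying on hive-site.xml being found.
val spark = SparkSession
  .builder()
  .master("local[*]")
  .appName("hivecore")
  .config("spark.sql.warehouse.dir", "hdfs://HADOOPMASTER:54310/user/hive/warehouse")
  .config("spark.hadoop.javax.jdo.option.ConnectionURL",
          "jdbc:mysql://MYSQLHOST:3306/hive_metastore")   // placeholder host/db
  .config("spark.hadoop.javax.jdo.option.ConnectionDriverName", "com.mysql.jdbc.Driver")
  .config("spark.hadoop.javax.jdo.option.ConnectionUserName", "hiveuser")      // placeholder
  .config("spark.hadoop.javax.jdo.option.ConnectionPassword", "hivepassword")  // placeholder
  .enableHiveSupport()
  .getOrCreate()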

Build.sbt

libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.1.0"
libraryDependencies += "com.typesafe" % "config" % "1.3.0" 
libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.1.0"
libraryDependencies += "org.apache.spark" % "spark-hive_2.11" % "2.1.0"
libraryDependencies += "mysql" % "mysql-connector-java" % "5.1.42"

Note: I can access the Hive tables from spark-shell.

Thanks.

1 Answer:

Answer 0 (score: 1)

When you set the master to local (context.setMaster(local)), it may not pick up the Spark configuration you have set up on the cluster, especially when you launch it from Eclipse.

Build a jar out of it and launch it from the command line with spark-submit --class <main class package> --master spark://207.184.161.138:7077 --deploy-mode client

The master URL spark://207.184.161.138:7077 should be replaced with the IP and Spark port of your cluster.
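Assuming the application is packaged with sbt package (so dependencies are not bundled into the jar), the invocation might look roughly like the sketch below; the paths and jar names are placeholders, the class name comes from the object in the question, and --jars puts the MySQL connector on the driver and executor classpaths:

spark-submit \
  --class hivecore \
  --master spark://207.184.161.138:7077 \
  --deploy-mode client \
  --jars /path/to/mysql-connector-java-5.1.42.jar \
  /path/to/hivecore.jar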

Also, remember to initialize a HiveContext so that queries are run against the underlying Hive.

import org.apache.spark.sql.hive.HiveContext

val hc = new HiveContext(sc)  // sc: an existing SparkContext
hc.sql("SELECT * FROM ...")
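As a side note, HiveContext is the Spark 1.x entry point and is deprecated in Spark 2.x; the SparkSession built with enableHiveSupport() in the question plays the same role, so the query can equally be issued through it, roughly like this:

import org.apache.spark.sql.SparkSession

// Spark 2.x equivalent of the HiveContext snippet above
val spark = SparkSession.builder().enableHiveSupport().getOrCreate()
spark.sql("SELECT * FROM sample.source").show()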