I am trying to load a CSV or XML file into a pre-existing Hive table using IntelliJ with Spark and Scala, and the final step of saving the DataFrame throws the exceptions below.
val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!")
xml.write.format("parquet").mode("overwrite").partitionBy("designation").insertInto("test2")
println("TABLE SAVED!!!!!!!")
Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!")
xml.write.format("parquet")
.mode("overwrite")
.partitionBy("designation")
.saveAsTable("test2")
Exception in thread "main" java.lang.NoSuchMethodException: org.apache.hadoop.hive.ql.metadata.Hive.loadDynamicPartitions(org.apache.hadoop.fs.Path, java.lang.String, java.util.Map, boolean, int, boolean, boolean, boolean)
val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new SQLContext(sc)
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!") xml.write.format("parquet").mode("overwrite").partitionBy("designation").insertInto("test2")
println("TABLE SAVED!!!!!!!")
Exception in thread "main" org.apache.spark.sql.AnalysisException: Table not found: test2;
val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new SQLContext(sc)
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
println("CONFIG DONE!!!!!")
val xml = hiveContext.read.format("com.databricks.spark.xml").option("rowTag","employee").load("/PUBLIC_TABLESPACE/updatedtest1.xml")
println("XML LOADED!!!!!!") xml.write.format("parquet").mode("overwrite").partitionBy("designation").saveAsTable("test2")
println("TABLE SAVED!!!!!!!")
Exception in thread "main" java.lang.RuntimeException: Tables created with SQLContext must be TEMPORARY. Use a HiveContext instead.
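For comparison, here is a minimal sketch of the insertInto path into the pre-existing partitioned table with a HiveContext. This is only the shape of the call, not a confirmed fix for the exceptions above; it assumes test2 already exists partitioned by designation and that designation is the last column of the DataFrame, since insertInto matches columns by position and takes the partitioning from the table definition:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Sketch only; assumes the pre-existing Hive table test2 is partitioned by designation
// and that designation is the last column of the loaded DataFrame.
val sparkConf = new SparkConf().setAppName("TEST")
val sc = new SparkContext(sparkConf)
val hiveContext = new HiveContext(sc)
hiveContext.setConf("hive.exec.dynamic.partition", "true")
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

val xml = hiveContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "employee")
  .load("/PUBLIC_TABLESPACE/updatedtest1.xml")

// insertInto writes into the table's existing storage format and partitioning,
// so format("parquet") and partitionBy("designation") are not passed here.
xml.write.mode("overwrite").insertInto("test2")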
Edit, with the BUILD.SBT file:
name := "testonSpark"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.6.0"
libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.5.0"
libraryDependencies += "org.apache.spark" % "spark-hive_2.10" % "1.6.0"
Answer 0 (score: -1)
Try using this sbt file:
val sparkVersion = "1.6.0"
resolvers ++= Seq(
  "apache-snapshots" at "http://repository.apache.org/snapshots/"
)

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion,
  "org.apache.spark" %% "spark-hive" % sparkVersion,
  "org.apache.spark" %% "spark-mllib" % sparkVersion
)
libraryDependencies += "com.databricks" % "spark-csv_2.10" % "1.5.0"