This is a follow-up to Save Spark dataframe as dynamic partitioned table in Hive. I tried to use the suggestion from the answer there, but could not get it to work in Spark 1.6.1.
I am trying to create partitions programmatically from a DataFrame. Here is the relevant code (adapted from a Spark test):
hc.setConf("hive.metastore.warehouse.dir", "tmp/tests")
// hc.setConf("hive.exec.dynamic.partition", "true")
// hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hc.sql("create database if not exists tmp")
hc.sql("drop table if exists tmp.partitiontest1")
Seq(2012 -> "a").toDF("year", "val")
.write
.partitionBy("year")
.mode(SaveMode.Append)
.saveAsTable("tmp.partitiontest1")
hc.sql("show partitions tmp.partitiontest1").show
The full file is here: https://gist.github.com/SashaOv/7c65f03a51c7e8f9c9e018cd42aa4c4a
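For reference, here is a minimal sketch of the boilerplate the snippet assumes (the actual setup lives in the gist; the app name here is made up):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("partition-test")) // hypothetical app name
val hc = new HiveContext(sc) // the `hc` used in the snippets
import hc.implicits._        // provides Seq(...).toDF(...)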
The partition files are created on the file system, but Hive complains that the table is not partitioned:
======================
HIVE FAILURE OUTPUT
======================
SET hive.support.sql11.reserved.keywords=false
SET hive.metastore.warehouse.dir=tmp/tests
OK
OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Table tmp.partitiontest1 is not a partitioned table
======================
It looks like the root cause is that org.apache.spark.sql.hive.HiveMetastoreCatalog.newSparkSQLSpecificMetastoreTable always creates the table with an empty list of partition columns.
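One quick way to confirm this (a diagnostic sketch, not from the original post) is to dump the table metadata right after the write; if the table were actually partitioned, the output would include a "Partition Information" section listing year:

// Inspect what the metastore actually recorded for the table.
hc.sql("describe formatted tmp.partitiontest1")
  .collect()
  .foreach(println)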
Any help moving this forward would be appreciated.
Edit: I have also filed SPARK-14927.
Answer 0 (score: 1)
I found a workaround: if you create the table up front, then saveAsTable() does not mess it up. The following works:
hc.setConf("hive.metastore.warehouse.dir", "tmp/tests")
// hc.setConf("hive.exec.dynamic.partition", "true")
// hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
hc.sql("create database if not exists tmp")
hc.sql("drop table if exists tmp.partitiontest1")
// Added line:
hc.sql("create table tmp.partitiontest1(val string) partitioned by (year int)")
Seq(2012 -> "a").toDF("year", "val")
.write
.partitionBy("year")
.mode(SaveMode.Append)
.saveAsTable("tmp.partitiontest1")
hc.sql("show partitions tmp.partitiontest1").show
This workaround works on 1.6.1 but not on 1.5.1.
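For completeness, the approach covered by the linked question (pre-creating the partitioned table and writing with insertInto under dynamic partitioning) is a possible alternative; this is an unverified sketch along those lines:

// Dynamic-partition insert into the pre-created table.
// insertInto matches columns by position, and the partition
// column(s) must come last, so reorder the DataFrame first.
hc.setConf("hive.exec.dynamic.partition", "true")
hc.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

Seq(2012 -> "a").toDF("year", "val")
  .select("val", "year") // partition column last
  .write
  .mode(SaveMode.Append)
  .insertInto("tmp.partitiontest1")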