Many search results point to pyspark code for creating a table in the Hive metastore, along the lines of:
hivecx.sql("...create table syntax that matches the dataframe...")
df.write.mode("overwrite").partitionBy('partition_colname').insertInto("national_dev.xh_claimline")
I have tried many variations of write/save/insert and modes, but I always get:
Caused by: java.io.FileNotFoundException: File does not exist: /user/hive/warehouse/national_dev.db/xh_claimline/000000_0
The table directory exists in Hadoop, but the 000000_0 subdirectory does not. I assumed that was because the table is empty and I hadn't written to it yet.
hadoop fs -ls /user/hive/warehouse/national_dev.db/xh_claimline
Found 2 items
drwxrwxrwt - mryan hive 0 2017-03-20 12:26 /user/hive/warehouse/national_dev.db/xh_claimline/.hive-staging_hive_2017-03-20_12-26-35_382_2703713921168172595-1
drwxrwxrwt - mryan hive 0 2017-03-20 12:29 /user/hive/warehouse/national_dev.db/xh_claimline/.hive-staging_hive_2017-03-20_12-29-40_775_73045420253990110-1
This is on Cloudera. Spark version: 17/03/20 11:45:21 INFO spark.SparkContext: Running Spark version 1.6.0
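For context on the failing call: in Spark 1.6, DataFrameWriter.insertInto writes into a table that must already exist in the metastore, and it resolves columns by position rather than by name. A minimal sketch of that create-then-insert pattern, using hypothetical DDL, column names, and sample data (none of this is the asker's actual schema):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="insertinto-sketch")
hivecx = HiveContext(sc)

# Hypothetical DDL; insertInto requires the target table to exist already,
# with columns in the same order as the DataFrame's.
hivecx.sql("""
CREATE TABLE IF NOT EXISTS national_dev.xh_claimline (claim_id INT)
PARTITIONED BY (partition_colname STRING)
STORED AS PARQUET
""")

# Dynamic-partition inserts into a partitioned Hive table typically need
# these settings relaxed.
hivecx.setConf("hive.exec.dynamic.partition", "true")
hivecx.setConf("hive.exec.dynamic.partition.mode", "nonstrict")

# Hypothetical data standing in for the real claim lines.
df = hivecx.createDataFrame([(1, "2017-03-01")], ["claim_id", "partition_colname"])

# In PySpark 1.6 the overwrite flag is passed to insertInto directly.
df.write.insertInto("national_dev.xh_claimline", overwrite=True)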
Answer 0 (score: 0)
Looking at your insertInto statement: since you are already writing with mode overwrite, you don't need insertInto at all. Use saveAsTable directly with the parquet format. Here is the modified statement:
df = hivecx.sql("...create table syntax that matches the dataframe...")
df.write.mode("overwrite").format("parquet").partitionBy('partition_colname').saveAsTable("national_dev.xh_claimline")