Hive partition per file

Date: 2017-10-27 07:32:08

Tags: hadoop hive hdfs

I don't want files to pile up. In the past I hit errors because the number of HDFS files exceeded the limit, and I suspect directories are counted toward that maximum object count as well. So I want to partition the table with one file per partition, not one directory per partition.
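As a side note, one way to check that suspicion is the standard hadoop fs -count command, which reports both the directory count and the file count for a tree; a generic sketch using this question's paths:

# prints: DIR_COUNT  FILE_COUNT  CONTENT_SIZE  PATHNAME
hadoop fs -count /test/test.db/test_log
# -q additionally shows any name/space quotas set on the path
hadoop fs -count -q /test/test.db/test_log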

The partition directory layout I already know:

/test/test.db/test_log/create_date=2013-04-09/2013-04-09.csv.gz
/test/test.db/test_log/create_date=2013-04-10/2013-04-10.csv.gz

I tried adding a partition like this, and it worked:

ALTER TABLE test_log ADD PARTITION (create_date='2013-04-09') LOCATION '/test/tmp/test_log/2013-04-09/'
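To double-check a partition added this way, standard HiveQL offers these sanity checks (a sketch, not from the original post):

SHOW PARTITIONS test_log;
-- or inspect exactly where one partition points:
DESCRIBE FORMATTED test_log PARTITION (create_date='2013-04-09');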

The file paths I want for the partitions instead:

/test/test.db/test_log/create_date=2013-04-09.csv.gz
/test/test.db/test_log/create_date=2013-04-10.csv.gz

I tried to add the partition like this:

ALTER TABLE test_log ADD PARTITION (create_date='2013-04-09') LOCATION '/test/tmp/test_log/2013-04-09.csv.gz'

It raised an error:

======================
HIVE FAILURE OUTPUT
======================
SET hive.support.sql11.reserved.keywords=false
SET hive.metastore.warehouse.dir=hdfs:/test/test.db
OK
OK
OK
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:hdfs://ABCDEFG/test/tmp/test_log/2013-04-09.csv.gz is not a directory or unable to create one)

======================
END HIVE FAILURE OUTPUT
======================

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/spark/python/pyspark/sql/context.py", line 580, in sql
    return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
    return f(*a, **kw)
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o32.sql.
: org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:hdfs://ABCDEFG/test/tmp/test_log/2013-04-09.csv.gz is not a directory or unable to create one)

The table schema is like this:

CREATE TABLE IF NOT EXISTS test_log (
    testid INT, 
    create_dt STRING
) 
PARTITIONED BY (create_date STRING) 
ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' 
LINES TERMINATED BY '\n' 
STORED AS TEXTFILE
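For reference, each gzipped CSV holds just the two delimited data columns, while create_date comes from the partition itself; Hive decompresses .gz TEXTFILE data transparently when reading. A minimal query sketch with made-up values:

-- hypothetical line inside 2013-04-09.csv.gz: 1001,2013-04-09 07:32:08
SELECT testid, create_dt
FROM test_log
WHERE create_date = '2013-04-09'  -- prunes to the one partition
LIMIT 10;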

  • For privacy reasons I changed some paths in the commands, so there may be typos. Please don't pay attention to the file names.

1 answer:

Answer 0 (score: 0)

When creating or altering the location of a Hive table or partition, you should specify only a folder, not a file:

ALTER TABLE test_log ADD PARTITION (create_date='2013-04-09') LOCATION '/test/tmp/test_log/create_date=2013-04-09/'

Then put the file into that location:

hadoop fs -put create_date=2013-04-09.csv.gz /test/tmp/test_log/create_date=2013-04-09/
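Putting the answer together end to end, a minimal sketch (paths follow the examples above; hive -e is just one way to run the DDL, and the asker's PySpark sqlContext.sql works the same way):

# 1. create the partition directory and upload the gzipped file into it
hadoop fs -mkdir -p /test/tmp/test_log/create_date=2013-04-09/
hadoop fs -put create_date=2013-04-09.csv.gz /test/tmp/test_log/create_date=2013-04-09/
# 2. point the partition at the directory, not at the file inside it
hive -e "ALTER TABLE test_log ADD PARTITION (create_date='2013-04-09') LOCATION '/test/tmp/test_log/create_date=2013-04-09/'"
# 3. sanity check: the partition should now be queryable
hive -e "SELECT COUNT(*) FROM test_log WHERE create_date='2013-04-09'"

Note that this layout still costs two NameNode namespace objects per partition (the directory plus the file inside it), since HDFS counts directories as well as files toward its object limits, which was the asker's original concern.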