Access error when adding a partition to a Hive table

Time: 2018-10-15 18:53:51

Tags: hive hdfs partition

I am working through a training exercise on a Cloudera VM and have created an external table. When I add a partition, a permission exception is thrown.

Is it normal for Hive to try to store files in the root directory of HDFS? Should I change this behavior, and if so, how? Otherwise, what do I need to do to resolve the problem? I have not found this error in the documentation.

hive> use question3;
OK
Time taken: 0.016 seconds
hive> drop table if exists question3;
OK
Time taken: 0.162 seconds
hive> create external table question3 (
    >   month_name string
    > , day string
    > , time string
    > , node string
    > , process string
    > , log_msg string
    > )
    > partitioned by (year int, month int)
    > row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
    > with serdeproperties (
    >   "input.regex" = "^(\\S+)\\s+(\\S+)\\s+(\\S+:\\S+:\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(.*$)"
    > , "output.format.string" = "%1$s %2$s %3$s %4$s %5$s"
    > )
    > stored as textfile
    > location 'hdfs:/user/hive/question3';
OK
Time taken: 0.178 seconds
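As an aside on the table definition above: the `input.regex` handed to `RegexSerDe` splits a syslog-style line into the six declared columns. A minimal Python sketch of what that pattern does (the sample log line here is a hypothetical one, not taken from the question's data):

```python
import re

# The same pattern passed to RegexSerDe via "input.regex".
# Java and Python regex syntax agree for this expression.
pattern = re.compile(r"^(\S+)\s+(\S+)\s+(\S+:\S+:\S+)\s+(\S+)\s+(\S+)\s+(.*$)")

# Hypothetical syslog-style line; the six capture groups map to the
# columns month_name, day, time, node, process, log_msg.
line = "Jul 16 03:27:01 node1 CRON[1234]: (root) CMD (run-parts /etc/cron.hourly)"

m = pattern.match(line)
print(m.groups())
# ('Jul', '16', '03:27:01', 'node1', 'CRON[1234]:',
#  '(root) CMD (run-parts /etc/cron.hourly)')
```

Note that `output.format.string` in the DDL only references five placeholders (`%1$s` through `%5$s`) even though the pattern captures six groups; the final group `(.*$)` soaks up the remainder of the line as `log_msg`.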
hive> alter table question3 add if not exists partition (year=2016, month=06)
    > location "/home/tobi/question3/question3_jul16.log";
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:java.io.IOException: Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=root, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
    at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
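The stack trace shows Hive, running as user `root`, being denied WRITE access on the HDFS root inode `"/"`. The likely trigger is that the partition `location` points at a local filesystem path (`/home/tobi/question3/question3_jul16.log`); Hive interprets it as an HDFS path and tries to create the missing directories starting from `/`, which `root` cannot write to. One possible workaround, sketched under the assumption that the log file exists locally at that path and that the table location `/user/hive/question3` from the DDL is writable (directory layout and user names here are assumptions, not from the question):

```shell
# Create a partition directory inside the table's own HDFS location
# and upload the local log file into it (running as the hdfs superuser).
sudo -u hdfs hdfs dfs -mkdir -p /user/hive/question3/year=2016/month=6
sudo -u hdfs hdfs dfs -put /home/tobi/question3/question3_jul16.log \
    /user/hive/question3/year=2016/month=6/
```

Then point the partition at that HDFS directory (a directory, not a file), e.g. in the hive shell:

`alter table question3 add if not exists partition (year=2016, month=6) location '/user/hive/question3/year=2016/month=6';`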

0 Answers:
