Cannot access Hive internal tables - AccessControlException

Asked: 2017-05-18 12:03:57

Tags: hadoop hive mapr

My user ID and my team's IDs cannot access any of the internal tables in the Hive DB. Whenever we launch a query, from HUE or from the CLI alike, we get an 'AccessControlException'. Please find the log below:

    INFO  : set mapreduce.job.reduces=<number>
    INFO  : Cleaning up the staging area maprfs:/var/mapr/cluster/yarn/rm/staging/keswara/.staging/job_1494760161412_0139
    ERROR : Job Submission failed with exception org.apache.hadoop.security.AccessControlException
      (User keswara(user id 1802830393) does not have access to
       maprfs:///user/hive/warehouse/bistore_sit.db/wt_consumer/d_partition_number=0/000114_0)'
    org.apache.hadoop.security.AccessControlException: User keswara(user id 1802830393) does not have access to maprfs:///user/hive/warehouse/bistore_sit.db/wt_consumer/d_partition_number=0/000114_0
        at com.mapr.fs.MapRFileSystem.getMapRFileStatus(MapRFileSystem.java:1320)
        at com.mapr.fs.MapRFileSystem.getFileStatus(MapRFileSystem.java:942)
        at org.apache.hadoop.fs.FileSystem.getFileBlockLocations(FileSystem.java:741)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1762)
        at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1747)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:307)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
        at org.apache.hadoop.hive.shims.Hadoop23Shims$1.listStatus(Hadoop23Shims.java:148)
        at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:218)
        at org.apache.hadoop.mapred.lib.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:75)
        at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getSplits(HadoopShimsSecure.java:310)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getCombineSplits(CombineHiveInputFormat.java:472)
        at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:573)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:331)
        at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:323)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:421)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
        at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:421)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
        at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:75)

No user can access the internal tables now, not even users who are part of the mapr group and have sudo!

The tables and partitions are owned by the mapr group, and the permissions look fine:

[mapr@SAN2LPMR03 mapr]$ hadoop fs -ls /user/hive/warehouse/bistore.db/wt_consumer
Found 1 items
drwxrwxrwt - mapr mapr 1 2017-03-24 11:51 /user/hive/warehouse/bistore.db/wt_consumer/d_partition_number=__HIVE_DEFAULT_PARTITION__
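Note that the `ls` above only proves the partition *directory* is world-writable (the trailing `t` is the sticky bit); the file `000114_0` named in the exception may still carry tighter bits, because POSIX permissions are checked per object, not inherited from the parent. A minimal local sketch of that pitfall, using plain `chmod`/`stat` on a temp directory rather than cluster commands:

```shell
# A permissive directory does not make the files inside it accessible:
# each object carries its own mode bits.
tmp=$(mktemp -d)
chmod 1777 "$tmp"              # drwxrwxrwt, like the warehouse dir above
touch "$tmp/000114_0"
chmod 600 "$tmp/000114_0"      # the file itself stays owner-only

stat -c '%a %n' "$tmp" "$tmp/000114_0"
# the directory reports 1777 while the file reports 600
rm -rf "$tmp"
```

On the cluster, the equivalent check would be `hadoop fs -ls` on the partition directory itself to see the data files' own permissions.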

Please help me solve this issue! Your help is much appreciated!

2 Answers:

Answer 0 (score: 1)

If the table is stored in Parquet format, the table's files grant write permission only to the user who created the table.

To work around this, you can change the permissions on that file with a statement like the one below:

hdfs dfs -chmod 777 /user/hive/warehouse/bistore_sit.db/wt_consumer/d_partition_number=0/000114_0/*

This statement grants all users full permissions on that particular file.
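Since every partition directory and data file carries its own mode bits, fixing one file at a time quickly gets tedious; `chmod -R` applies the mode to the whole table tree in one pass (on the cluster that would be `hadoop fs -chmod -R 777` on the table directory). The recursive behaviour itself, sketched locally with hypothetical paths:

```shell
# chmod -R walks the tree and applies the mode to every entry,
# so all partitions and data files end up with the same bits.
tbl=$(mktemp -d)                                 # stands in for the table dir
mkdir -p "$tbl/d_partition_number=0"
touch "$tbl/d_partition_number=0/000114_0"
chmod 600 "$tbl/d_partition_number=0/000114_0"   # restrictive, like Parquet output

chmod -R 777 "$tbl"
stat -c '%a %n' "$tbl/d_partition_number=0/000114_0"   # now 777
rm -rf "$tbl"
```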

While testing some tables in CSV and Parquet formats, I noticed the following.

When you create a Hive table in CSV format, the table gets 777 permissions, so every user with access through the group you belong to can use it.

But when a Hive table is created in Parquet format, only the user who created the table gets write access. I believe this behaviour is specific to the Parquet format.
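One way to avoid chasing per-file modes after every insert (a suggestion based on the general Hive configuration, not something verified in this thread) is to let files written under the warehouse inherit the parent directory's permissions. In Hive 1.x/2.x this is controlled by the `hive.warehouse.subdir.inherit.perms` property in `hive-site.xml`:

```xml
<!-- hive-site.xml: make table and partition directories created under the
     warehouse inherit the warehouse directory's permissions and group.
     A sketch only - confirm the property is honoured on your Hive/MapR version. -->
<property>
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>true</value>
</property>
```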

Answer 1 (score: 0)

[root@psnode44 hive-2.1]# hadoop fs -ls /user/hive/warehouse/

Found 1 items
drwxrw-rw-   - mapr mapr          2 2017-06-28 12:49 /user/hive/warehouse/test

0: jdbc:hive2://10.20.30.44:10000/> select * from test;

Error: java.io.IOException: org.apache.hadoop.security.AccessControlException: User basa(user id 5005) does not have access to maprfs:/user/hive/warehouse/test (state=,code=0)

[root@psnode44 hive-2.1]# hadoop fs -ls /user/hive/warehouse/

Found 1 items
drwxrwxrwx   - mapr mapr          2 2017-06-28 12:49 /user/hive/warehouse/test

Even though I changed the permissions on the warehouse with chmod, I still get the same error.

[root@psnode44 hive-2.1]# hadoop fs -chmod -R 777 /user/hive/warehouse/

[root@psnode44 hive-2.1]# hadoop fs -ls /user/hive/warehouse/

Found 1 items
drwxrwxrwx   - mapr mapr          2 2017-06-28 12:49 /user/hive/warehouse/test

0: jdbc:hive2://10.20.30.44:10000/> select * from test;

Error: java.io.IOException: org.apache.hadoop.security.AccessControlException: User basa(user id 5005) does not have access to maprfs:/user/hive/warehouse/test (state=,code=0)
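If chmod 777 does not help, one thing worth checking (an assumption, not confirmed in this thread) is how HiveServer2 impersonation is configured: with `hive.server2.enable.doAs` set to true, queries run as the connecting user (here `basa`) and its access is checked against the files; with it set to false, they run as the HiveServer2 service user. The switch lives in `hive-site.xml`:

```xml
<!-- hive-site.xml: with doAs=true, queries execute as the connected user
     (e.g. basa), so that user needs access to the warehouse files.
     With doAs=false, they execute as the HiveServer2 service account. -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```

On MapR specifically, volume-level ACEs can also deny access independently of the POSIX bits shown by `hadoop fs -ls`, so the bits alone may not tell the whole story.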