Reading HDFS files from a Spark application in a Kerberized cluster

Date: 2016-12-02 12:24:07

Tags: hadoop apache-spark hdfs kerberos keytab

I set up a Hadoop cluster with Hortonworks Data Platform 2.5, which also includes Ambari 2.4, Kerberos, Spark 1.6.2 and HDFS.

I have Kerberos principals and keytabs for the following users, for example:

  • spark (created by Ambari when Kerberos was enabled)
  • hdfsuserA (created via kadmin -> add_principal; see the sketch below)
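
For reference, a minimal sketch of how such a principal and its keytab might be created (the realm EXAMPLE.COM is an assumption; the keytab path matches the one used further below):

kadmin.local -q "addprinc -randkey hdfsuserA@EXAMPLE.COM"
kadmin.local -q "ktadd -k /etc/security/keytabs/hdfsuserA.keytab hdfsuserA@EXAMPLE.COM"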

Running the spark-submit command in the secured cluster requires the spark user, but the Spark application has to open files in the HDFS directory /user/hdfsuserA/..., which is owned by hdfsuserA (permissions 700).
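
(A quick way to confirm that ownership from a shell with HDFS client access; the sketched output matches the permissions reported in the exception below:)

hdfs dfs -ls -d /user/hdfsuserA/new/data
# drwx------   - hdfsuserA hadoop  ...  /user/hdfsuserA/new/data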

Since I enabled Kerberos, my Spark application no longer runs; it fails with the following exception:

[Stage 1:>     (0 + 92) / 162]Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 55 in stage 1.0 failed 4 times, most recent failure: Lost task 55.3 in stage 1.0 (TID 225, had-data1): org.apache.hadoop.security.AccessControlException: Permission denied: user=spark, access=EXECUTE, inode="/user/hdfsuserA/new/data/Export_PDM_Hadoop_05_2016.csv":hdfsuserA:hadoop:drwx------
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:259)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1785)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1862)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1831)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1744)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

The problem is that I authenticate as the spark user in order to be able to start the Spark application, but inside the application the exception is thrown because the spark user has no access to the /user/hdfsuserA HDFS directory.

When I run the spark-submit command as user hdfsuserA, I get:

[hdfsuserA@had-job ~]$ kinit -kt /etc/security/keytabs/hdfsuserA.keytab hdfsuserA

[hdfsuserA@had-job ~]$ spark-submit --class spark.sales.TestAnalysis --master yarn --deploy-mode client /home/hdfsuserA/application_new.jar hdfs://had-job:8020/user/hdfsuserA/new/data/*
16/12/03 09:44:46 INFO Remoting: Starting remoting
16/12/03 09:44:46 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@141.79.71.34:46996]
spark.yarn.driver.memoryOverhead is set but does not apply in client mode.
spark.driver.cores is set but does not apply in client mode.
16/12/03 09:44:49 INFO metastore: Trying to connect to metastore with URI thrift://had-job:9083
16/12/03 09:44:49 INFO metastore: Connected to metastore.
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:122)
        at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
        at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
        at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
        at myutil.SparkContextFactory.createSparkContext(SparkContextFactory.java:34)
        at spark.sales.BasketBasedSalesAnalysis.main(BasketBasedSalesAnalysis.java:46)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

What is the correct solution for this problem? Can I kinit as a different user inside the application?
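
(For what it's worth, Hadoop's UserGroupInformation API does offer a programmatic equivalent of kinit. A minimal, untested sketch, assuming the keytab is readable on the driver host; the class name KeytabLoginSketch and the realm EXAMPLE.COM are illustrative assumptions, the keytab path is the one from the question:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
    public static void main(String[] args) throws Exception {
        // Cluster settings (core-site.xml/hdfs-site.xml) must be on the classpath
        // so that hadoop.security.authentication=kerberos is picked up.
        Configuration conf = new Configuration();
        UserGroupInformation.setConfiguration(conf);

        // Programmatic equivalent of kinit: log in as hdfsuserA from its keytab.
        UserGroupInformation.loginUserFromKeytab(
                "hdfsuserA@EXAMPLE.COM",
                "/etc/security/keytabs/hdfsuserA.keytab");

        // FileSystem handles obtained after the login act as hdfsuserA.
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs.exists(new Path("/user/hdfsuserA/new/data")));
    }
}

(Note that this only changes the login of the local JVM, i.e. the driver; on YARN the executors receive HDFS delegation tokens at submission time, so it is not necessarily a drop-in replacement for submitting as the right user.)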

1 answer:

Answer 0 (score: 1)

I found the problem: it was a user issue! Since I had created the user hdfsuserA only on the NameNode host of the cluster, from which I run the spark-submit command, the application could not authenticate as this user via the keytabs on the other hosts.

So, to fix the problem: add the same user on all hosts of the cluster:

sudo useradd hdfsuserA
sudo passwd hdfsuserA
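
(To verify, one might check on each host that the account now exists, and on the submit host that the keytab login still works:)

id hdfsuserA     # run on every host of the cluster
kinit -kt /etc/security/keytabs/hdfsuserA.keytab hdfsuserA     # on the submit host
klist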

After that, calling the Spark application works (with the --master yarn parameter of spark-submit; --master local[x] always worked)!