Connecting to Kerberized HDFS: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name

Date: 2016-02-10 20:55:16

Tags: java hadoop kerberos cloudera keytab

I am trying to connect to a Kerberized HDFS cluster with the code below. With this same code I can, of course, access HBase via HBaseConfiguration:

Configuration config = new Configuration();
config.set("hadoop.security.authentication", "kerberos");

UserGroupInformation.setConfiguration(config);
UserGroupInformation ugi =
        UserGroupInformation.loginUserFromKeytabAndReturnUGI("me@EXAMPLE.COM", "me.keytab");
model = ugi.doAs((PrivilegedExceptionAction<Map<String, Object>>) () -> {
    testHadoop(hcb.gethDFSConfigBean());
    return null;
});

I can successfully access Solr and Impala with the same keytab and principal; strangely, it is only for HDFS that the service name cannot be found.

Please see the stack trace below:

java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "Securonix-int3.local/10.0.4.36"; destination host is: "sobd189.securonix.com":8020; 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1988)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
    at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
    at com.securonix.application.ui.uiUtil.SnyperUIUtil.lambda$main$4(SnyperUIUtil.java:1226)
    at com.securonix.application.ui.uiUtil.SnyperUIUtil$$Lambda$6/1620890840.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at com.securonix.application.ui.uiUtil.SnyperUIUtil.main(SnyperUIUtil.java:1216)
Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
    at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:680)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:643)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:730)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 23 more
Caused by: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
    at org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322)
    at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231)
    at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159)
    at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396)
    at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
    at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
    at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:717)

After I enabled Kerberos debug logging, I got the following output when calling FileSystem.get(). Kerberos debug log:

Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
16/02/22 15:53:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
>>> KdcAccessibility: reset
>>> KeyTabInputStream, readName(): EXAMPLE.COM
>>> KeyTabInputStream, readName(): securonix
>>> KeyTab: load() entry length: 55; type: 23
>>> KeyTabInputStream, readName(): EXAMPLE.COM
>>> KeyTabInputStream, readName(): securonix
>>> KeyTab: load() entry length: 71; type: 18
Looking for keys for: securonix@EXAMPLE.COM
Added key: 18version: 1
Added key: 23version: 1
Looking for keys for: securonix@EXAMPLE.COM
Added key: 18version: 1
Added key: 23version: 1
default etypes for default_tkt_enctypes: 18 18 16.
>>> KrbAsReq creating message
>>> KrbKdcReq send: kdc=sobd189.securonix.com TCP:88, timeout=30000, number of retries =3, #bytes=139
>>> KDCCommunication: kdc=sobd189.securonix.com TCP:88, timeout=30000, Attempt=1, #bytes=139
>>> DEBUG: TCPClient reading 639 bytes
>>> KrbKdcReq send: #bytes read=639
>>> KdcAccessibility: remove sobd189.securonix.com
Looking for keys for: securonix@EXAMPLE.COM
Added key: 18version: 1
Added key: 23version: 1
>>> EType: sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType
>>> KrbAsRep cons in KrbAsReq.getReply securonix

Interestingly, when I use a FileSystem API such as hdfs.exists(), the following is logged:
 >>>KinitOptions cache name is /tmp/krb5cc_501
 >> Acquire default native Credentials
 default etypes for default_tkt_enctypes: 18 18 16.
 >>> Found no TGT's in LSA

2 Answers:

Answer 0 (score: 2)

I think the problem is that HDFS expects the Configuration to contain a value for dfs.datanode.kerberos.principal, the principal of the datanodes, which is missing in this case.

I ran into the same issue when I created a Configuration instance from core-site.xml alone and forgot to add hdfs-site.xml. As soon as I added hdfs-site.xml, which contains the following, it started working:

 <property>
      <name>dfs.datanode.kerberos.principal</name>
      <value>....</value>
 </property>
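
As a sanity check with no Hadoop dependency, the sketch below (the class name `HdfsSiteCheck`, the inline XML, and the EXAMPLE.COM realm are all made up for illustration) uses the JDK's built-in DOM parser to read Hadoop-style `<property>` entries the way `Configuration.addResource()` would, so you can verify that dfs.datanode.kerberos.principal actually reaches your client:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class HdfsSiteCheck {

    // Read a Hadoop-style <configuration> document into a name -> value map.
    public static Map<String, String> parse(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Map<String, String> props = new HashMap<>();
        NodeList properties = doc.getElementsByTagName("property");
        for (int i = 0; i < properties.getLength(); i++) {
            Element p = (Element) properties.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent().trim();
            String value = p.getElementsByTagName("value").item(0).getTextContent().trim();
            props.put(name, value);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical hdfs-site.xml content; the realm is a placeholder.
        String hdfsSite =
                "<configuration>"
              + "  <property>"
              + "    <name>dfs.datanode.kerberos.principal</name>"
              + "    <value>hdfs/_HOST@EXAMPLE.COM</value>"
              + "  </property>"
              + "</configuration>";
        Map<String, String> props = parse(hdfsSite);
        // If this key is absent from the loaded configuration, the client fails
        // with "Failed to specify server's Kerberos principal name".
        System.out.println(props.containsKey("dfs.datanode.kerberos.principal")); // prints "true"
    }
}
```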

Hope this helps.

Answer 1 (score: 0)

I hit the same problem on Spark2 and HDP 3.1 when using Isilon/OneFS as the storage in place of HDFS.

The OneFS service management pack does not provide configuration for some of the HDFS parameters that Spark2 expects (they are not available in Ambari at all), such as dfs.datanode.kerberos.principal. Without these parameters, the Spark2 HistoryServer may fail to start and report errors such as "Failed to specify server's principal name".

I added the following properties to OneFS under custom hdfs-site:

dfs.datanode.kerberos.principal=hdfs/_HOST@<MY REALM>
dfs.datanode.keytab.file=/etc/security/keytabs/hdfs.service.keytab
dfs.namenode.kerberos.principal=hdfs/_HOST@<MY REALM>
dfs.namenode.keytab.file=/etc/security/keytabs/hdfs.service.keytab 
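
In Ambari, entries under custom hdfs-site become ordinary Hadoop XML properties. Purely for illustration (with EXAMPLE.COM standing in for the real realm, since the literal &lt;MY REALM&gt; placeholder is not valid XML content), the first two key=value pairs above correspond to:

```xml
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hdfs/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/hdfs.service.keytab</value>
</property>
```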

That resolved the initial error. After that, I got errors of the following form:

Server has invalid Kerberos principal: hdfs/<isilon>.my.realm.com@my.realm.com, expecting: hdfs/somewhere.else.entirely@my.realm.com

This was related to cross-realm authentication, and was resolved by adding the following setting to custom hdfs-site:

dfs.namenode.kerberos.principal.pattern=*
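
This pattern setting is a glob matched against the principal the server presents. The toy check below is not Hadoop's actual implementation, and the principals are invented; it only illustrates why the client's host-derived expectation rejects the Isilon principal while "*" accepts anything:

```java
public class PrincipalPattern {

    // Toy glob match standing in for dfs.namenode.kerberos.principal.pattern:
    // '*' matches any character sequence; everything else matches literally.
    public static boolean matches(String pattern, String principal) {
        StringBuilder regex = new StringBuilder();
        for (char c : pattern.toCharArray()) {
            if (c == '*') {
                regex.append(".*");
            } else {
                regex.append(java.util.regex.Pattern.quote(String.valueOf(c)));
            }
        }
        return principal.matches(regex.toString());
    }

    public static void main(String[] args) {
        String isilon = "hdfs/isilon01.my.realm.com@MY.REALM.COM"; // invented principal
        // A fixed expected principal for a different host does not match,
        // producing "Server has invalid Kerberos principal: ... expecting: ...".
        System.out.println(matches("hdfs/somewhere.else.entirely@MY.REALM.COM", isilon)); // prints "false"
        // The pattern "*" accepts any server principal, resolving the error.
        System.out.println(matches("*", isilon)); // prints "true"
    }
}
```

Note that `dfs.namenode.kerberos.principal.pattern=*` disables the client-side principal check entirely, so it is a pragmatic workaround rather than a hardened configuration.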