I am trying to use distcp to copy a folder from my local Hadoop cluster (CDH4) to my Amazon S3 bucket.
I use the following command:
hadoop distcp -log /tmp/distcplog-s3/ hdfs://nameserv1/tmp/data/sampledata s3n://hdfsbackup/
hdfsbackup is the name of my Amazon S3 bucket.
DistCp fails with an UnknownHostException:
13/05/31 11:22:33 INFO tools.DistCp: srcPaths=[hdfs://nameserv1/tmp/data/sampledata]
13/05/31 11:22:33 INFO tools.DistCp: destPath=s3n://hdfsbackup/
No encryption was performed by peer.
No encryption was performed by peer.
13/05/31 11:22:35 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 54 for hadoopuser on ha-hdfs:nameserv1
13/05/31 11:22:35 INFO security.TokenCache: Got dt for hdfs://nameserv1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameserv1, Ident: (HDFS_DELEGATION_TOKEN token 54 for hadoopuser)
No encryption was performed by peer.
java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfsbackup
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:295)
at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:282)
at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:503)
at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:487)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:130)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:111)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:85)
at org.apache.hadoop.tools.DistCp.setup(DistCp.java:1046)
at org.apache.hadoop.tools.DistCp.copy(DistCp.java:666)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:881)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
Caused by: java.net.UnknownHostException: hdfsbackup
... 14 more
I have configured the AWS access key ID and secret key in core-site.xml on all nodes:
<!-- Amazon S3 -->
<property>
  <name>fs.s3.awsAccessKeyId</name>
  <value>MY-ID</value>
</property>
<property>
  <name>fs.s3.awsSecretAccessKey</name>
  <value>MY-SECRET</value>
</property>
<!-- Amazon S3N -->
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>MY-ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>MY-SECRET</value>
</property>
I can copy files from HDFS using the fs -cp command without any problem. The following command successfully copies an HDFS folder to S3:
hadoop fs -cp hdfs://nameserv1/tmp/data/sampledata s3n://hdfsbackup/
I know there is an Amazon S3-optimized distcp (s3distcp) available, but I don't want to use it because it doesn't support the update/overwrite options.
Answer 0 (score: 2)
It looks like you are using Kerberos security, and unfortunately Map/Reduce jobs currently cannot access Amazon S3 when Kerberos is enabled. You can see more details in MAPREDUCE-4548.
There is actually a patch that should fix it, but it is not currently part of any Hadoop distribution, so if you are able to modify and build Hadoop from source, here is what you should do:
Index: core/org/apache/hadoop/security/SecurityUtil.java
===================================================================
--- core/org/apache/hadoop/security/SecurityUtil.java (révision 1305278)
+++ core/org/apache/hadoop/security/SecurityUtil.java (copie de travail)
@@ -313,6 +313,9 @@
     if (authority == null || authority.isEmpty()) {
       return null;
     }
+    if (uri.getScheme().equals("s3n") || uri.getScheme().equals("s3")) {
+      return null;
+    }
     InetSocketAddress addr = NetUtils.createSocketAddr(authority, defPort);
     return buildTokenService(addr).toString();
   }
The ticket was last updated a few days ago, so hopefully it will be officially patched soon.
A simpler solution would be to disable Kerberos, but that may not be possible in your environment.
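For reference, disabling Kerberos means switching the cluster back to simple authentication in core-site.xml on all nodes (a sketch using the standard Hadoop security properties; a full rollback also involves HDFS/YARN-side settings and a cluster restart):

```xml
<property>
  <name>hadoop.security.authentication</name>
  <value>simple</value> <!-- was: kerberos -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>false</value>
</property>
```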
I have seen suggestions that it might work if your bucket is named like a domain name, but I haven't tried it, and even if it works it sounds like a hack.