Hive on Tez: Unable to load AWS credentials from any provider in the chain

Date: 2017-03-11 23:54:37

Tags: hadoop amazon-s3 hive

Environment: Hadoop 2.7.3, hive-2.2.0-SNAPSHOT, Tez 0.8.4

My core-site.xml:

<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>
    org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,
    org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider,
    com.amazonaws.auth.EnvironmentVariableCredentialsProvider
  </value>
</property>
<property>
  <name>fs.s3a.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>GOODKEYVALUE</value>
  <description>AWS access key ID. Omit for Role-based authentication.</description>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRETKEYVALUE</value>
  <description>AWS secret key. Omit for Role-based authentication.</description>
</property>

I can access the s3a URI correctly from the hadoop command line, and I can create external tables. Commands such as:

create external table mytable(a string, b string) location 's3a://mybucket/myfolder/';  
select * from mytable limit 20;

execute correctly, but

select count(*) from mytable; 

fails with:

Error: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1489267689011_0001_1_00, diagnostics=[Vertex vertex_1489267689011_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: url_sum_master initializer failed, vertex=vertex_1489267689011_0001_1_00 [Map 1], com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain
        at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:131)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1110)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:759)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:723)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
        at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:4949)
        at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:4923)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4178)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
        at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1313)
        at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1270)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:365)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:483)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1489267689011_0001_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1489267689011_0001_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
        at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:393)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:250)
        at org.apache.hive.service.cli.operation.SQLOperation.access$800(SQLOperation.java:91)
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:353)

The only way I can get this to work is to put accesskey:secretkey in the URI itself, which is not an option for production code.
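
For reference, that workaround embeds the credentials in the location path, roughly like this (a sketch of the legacy inline-credentials URI syntax that older Hadoop releases accept; the key values are placeholders):

create external table mytable(a string, b string)
location 's3a://GOODKEYVALUE:SECRETKEYVALUE@mybucket/myfolder/';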

Thanks.

2 Answers:

Answer 0 (score: 0)

You are right that you don't want to have secrets in your URIs. Soon Hadoop will warn you for doing this, and at some point it may block it entirely.

Look at the S3A troubleshooting section of the latest s3a docs.

If you are building Hadoop yourself (which your choice of SDK version implies), build Hadoop 2.8/2.9 and turn on debug logging in the s3a package. There is more security-related logging there, though it still has to log less than you might want, precisely to keep those keys secret.
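
A minimal sketch of turning that logging on via log4j.properties (the logger name is the Hadoop s3a package; where the file lives depends on your installation):

log4j.logger.org.apache.hadoop.fs.s3a=DEBUG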

You could also try setting the AWS environment variables on the target machines. That doesn't fix the problem, but it can help isolate it.
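
For example (a sketch; these are the standard variable names read by com.amazonaws.auth.EnvironmentVariableCredentialsProvider, with placeholder values):

export AWS_ACCESS_KEY_ID=GOODKEYVALUE
export AWS_SECRET_ACCESS_KEY=SECRETKEYVALUE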

Answer 1 (score: 0)

I solved this by reverting to Hive 2.1.1.

I think the problem was incompatible jar versions: my hadoop-aws-2.7.3.jar was compiled against aws-java-sdk-1.11.93, while Hive shipped a version compiled against AWS SDK 1.7.4.
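
One quick way to check for this kind of mismatch (a sketch; the lib directories shown are typical tarball-install defaults and may differ on your system):

# list the AWS SDK jars each component ships
ls $HADOOP_HOME/share/hadoop/tools/lib/ | grep -i aws
ls $HIVE_HOME/lib/ | grep -i aws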