I'm trying to run a MapReduce WordCount job on a text file stored in a bucket on Amazon S3. I've set up all the authentication needed for the MapReduce framework to talk to Amazon, but I keep running into this error. Any idea why this is happening?
13/01/20 13:22:15 ERROR security.UserGroupInformation: PriviledgedActionException as:root cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: s3://name-bucket/test.txt
Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: s3://name-bucket/test.txt
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:989)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:981)
at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:174)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:897)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:850)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:850)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:824)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1261)
at org.myorg.WordCount.main(WordCount.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Answer 0 (score: 7)
You actually have to replace the protocol s3 with s3n. These are two different filesystems with different properties: s3n is the native filesystem, which reads and writes ordinary S3 objects directly, whereas s3 is a block-based filesystem that stores data as blocks (much like HDFS) and can only read files it wrote itself (source). In your case, your bucket is probably using the s3n filesystem; I believe this is the default, and most buckets I've used are also s3n. So you should use s3n://name-bucket/test.txt
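
For reference, here is a minimal sketch of what the driver could look like with the corrected scheme, assuming the old org.apache.hadoop.mapred API that appears in your stack trace. The AWS key values and the output path are placeholders only, and the credentials could equally be set in core-site.xml rather than in code:

```java
// Minimal WordCount on the old mapred API, reading input from S3 via the s3n scheme.
// Placeholder values: the AWS keys and the output path are illustrative only.
package org.myorg;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // Emit (word, 1) for every token in the line.
      StringTokenizer tokenizer = new StringTokenizer(value.toString());
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
      }
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output, Reporter reporter)
        throws IOException {
      // Sum the counts collected for each word.
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    // Credentials for the s3n filesystem; these can also live in core-site.xml.
    conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");      // placeholder
    conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");  // placeholder

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    // Note the s3n:// scheme rather than s3://
    FileInputFormat.setInputPaths(conf, new Path("s3n://name-bucket/test.txt"));
    FileOutputFormat.setOutputPath(conf, new Path("s3n://name-bucket/output"));

    JobClient.runJob(conf);
  }
}
```

If you pass the paths on the command line instead of hard-coding them, only the arguments change, e.g. hadoop jar wordcount.jar org.myorg.WordCount s3n://name-bucket/test.txt s3n://name-bucket/output (jar and output names here are placeholders).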