Unable to run EMR Hadoop Streaming job with custom executables

Asked: 2013-04-10 18:39:48

Tags: hadoop amazon-web-services hadoop-streaming amazon-emr emr

Edit:

Looking through the namenode logs, I noticed that an exception is raised periodically. Could it be relevant?

2013-04-10 19:23:50,613 WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 43 on 9000): got exception trying to get groups for user job_201304101854_0005
org.apache.hadoop.util.Shell$ExitCodeException: id: job_201304101854_0005: No such user

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
    at org.apache.hadoop.util.Shell.run(Shell.java:182)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getUnixGroups(ShellBasedUnixGroupsMapping.java:78)
    at org.apache.hadoop.security.ShellBasedUnixGroupsMapping.getGroups(ShellBasedUnixGroupsMapping.java:53)
    at org.apache.hadoop.security.Groups.getGroups(Groups.java:79)
    at org.apache.hadoop.security.UserGroupInformation.getGroupNames(UserGroupInformation.java:1037)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.<init>(FSPermissionChecker.java:50)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5218)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5201)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:2030)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getFileInfo(NameNode.java:850)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:573)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2013-04-10 19:23:50,614 INFO org.apache.hadoop.security.ShellBasedUnixGroupsMapping (IPC Server handler 43 on 9000): add job_201304101854_0005 to shell userGroupsCache
2013-04-10 19:23:50,614 WARN org.apache.hadoop.security.UserGroupInformation (IPC Server handler 43 on 9000): No groups available for user job_201304101854_0005
2013-04-10 19:23:55,886 WARN org.apache.hadoop.security.UserGroupInformation (IPC Server handler 46 on 9000): No groups available for user job_201304101854_0005

We have built custom binaries for the map and reduce steps and tested that they operate correctly using the canonical "cat file | map | sort | reduce > output" pattern. We made sure to compile the binaries statically so as to pull in as many dependencies as possible, and we also confirmed that the binaries run on Amazon's EMR AMI by manually uploading them to the master node. In case it is relevant, our language of choice is Haskell, and the compiled result is a plain native executable.
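
For reference, a local smoke test along those lines might look like the following. This is only a sketch: the input file name and the gzip handling are assumptions based on the task status shown further down, and the binary name is the placeholder used throughout this post.

zcat file.csv.gz | ./Program map | sort | ./Program reduce > output.txt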

Taking the simplest case:

bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
    -input s3n://path/to/input \
    -output s3n://path/to/output \
    -mapper "s3n://path/to/Program map" \
    -reducer "s3n://path/to/Program reduce" 

The job does start, but it gets stuck at map 0% and will not budge. It never progresses from there, and none of the logs seem to indicate anything useful. Each map task is killed for "failing to report status" within 600 seconds. While sitting at 0% complete, each mapper shows a status similar to the following:

s3n://path/to/file.csv.gz:0+38175575

The counters section shows 17.5KB read from s3n.

If we now modify the job to the following as a test:

bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
    -input s3n://path/to/input \
    -output s3n://path/to/output \
    -mapper s3n://elasticmapreduce/samples/wordcount/wordSplitter.py \
    -reducer aggregate

then the mapper phase completes at 100%, but the reducer throws an exception:

java.io.IOException: exception in uploadSinglePart
    at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:163)
    at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.close(MultipartUploadOutputStream.java:219)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:70)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:96)
    at org.apache.hadoop.mapred.TextOutputFormat$LineRecordWriter.close(TextOutputFormat.java:109)
    at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:475)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:539)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:429)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: exception in putObject
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:128)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:83)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.fs.s3native.$Proxy3.storeFile(Unknown Source)
    at org.apache.hadoop.fs.s3native.MultipartUploadOutputStream.uploadSinglePart(MultipartUploadOutputStream.java:160)
    ... 12 more
Caused by: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: 8220819721FFE29E, AWS Error Code: AccessDenied, AWS Error Message: Access Denied, S3 Extended Request ID: TekkBZzgaBlK0e8SkoC7bcBsu1w7Nbpy2U7hPCGp5IPrrsqaPTxUg7QQ09xTXRYC
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:619)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:317)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:170)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2943)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1123)
    at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.storeFile(Jets3tNativeFileSystemStore.java:121)
    ... 20 more

Frustratingly, Hive running on the same type of EMR cluster, for example, seems to have no problem creating new external tables on S3, and therefore creating files there.

Having tried several ideas, I would be grateful if someone could point us in the right direction to get our setup working.

Thank you, OA

1 Answer:

Answer 0 (score: 3):

I think this may be your problem:

-mapper "s3n://path/to/Program map"

That space is quite likely causing you trouble. I would try building two separate binaries, one for map and one for reduce, that you can invoke directly rather than passing an argument; see the sketch below. At the very least it will help you isolate the problem.
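
With that change the invocation would look something like this (a sketch; ProgramMap and ProgramReduce are hypothetical names for the two separate binaries):

bin/hadoop jar contrib/streaming/hadoop-streaming.jar \
    -input s3n://path/to/input \
    -output s3n://path/to/output \
    -mapper s3n://path/to/ProgramMap \
    -reducer s3n://path/to/ProgramReduce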

Failing that, this looks like an S3 permissions or MIME type problem. I would check the permissions on your bucket to verify that the credentials you are using for the EMR job can access it.
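
For what it's worth, the same s3cmd tool used below should be able to show you the bucket's ACL from the command line (the bucket name here is a placeholder):

$ s3cmd info s3://your-bucket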

Once you have ruled that out, I would check the permissions and attributes on the binary file itself; I have run into strange problems when the S3 MIME type was set incorrectly. For example, this is the info for wordSplitter:

$ s3cmd info s3://elasticmapreduce/samples/wordcount/wordSplitter.py
s3://elasticmapreduce/samples/wordcount/wordSplitter.py (object):
File size: 294
Last mod:  Wed, 29 Feb 2012 01:50:25 GMT
MIME type: text/x-python
MD5 sum:   f5b4829658cfbcd5fa5eb32c58163fa8

Your binary may be defaulting to a MIME type that somehow gets in the way of execution.
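
If that turns out to be the case, one way to rule it out is to re-upload the binary with an explicit generic MIME type. This is a sketch with placeholder paths, using s3cmd's standard --mime-type option:

$ s3cmd put --mime-type=application/octet-stream Program s3://path/to/Program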